00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2006
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3267
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.086 The recommended git tool is: git
00:00:00.086 using credential 00000000-0000-0000-0000-000000000002
00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.119 Fetching changes from the remote Git repository
00:00:00.121 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.161 Using shallow fetch with depth 1
00:00:00.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.161 > git --version # timeout=10
00:00:00.202 > git --version # 'git version 2.39.2'
00:00:00.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.236 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.236 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.320 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.331 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.342 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:06.342 > git config core.sparsecheckout # timeout=10
00:00:06.354 > git read-tree -mu HEAD # timeout=10
00:00:06.371 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:06.393 Commit message: "inventory: add WCP3 to free inventory"
00:00:06.393 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:06.522 [Pipeline] Start of Pipeline
00:00:06.537 [Pipeline] library
00:00:06.538 Loading library shm_lib@master
00:00:06.538 Library shm_lib@master is cached. Copying from home.
00:00:06.558 [Pipeline] node
00:00:06.570 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.572 [Pipeline] {
00:00:06.583 [Pipeline] catchError
00:00:06.585 [Pipeline] {
00:00:06.597 [Pipeline] wrap
00:00:06.606 [Pipeline] {
00:00:06.614 [Pipeline] stage
00:00:06.616 [Pipeline] { (Prologue)
00:00:06.795 [Pipeline] sh
00:00:07.076 + logger -p user.info -t JENKINS-CI
00:00:07.098 [Pipeline] echo
00:00:07.100 Node: GP11
00:00:07.106 [Pipeline] sh
00:00:07.410 [Pipeline] setCustomBuildProperty
00:00:07.419 [Pipeline] echo
00:00:07.420 Cleanup processes
00:00:07.425 [Pipeline] sh
00:00:07.704 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.704 3895631 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.714 [Pipeline] sh
00:00:07.995 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.995 ++ grep -v 'sudo pgrep'
00:00:07.995 ++ awk '{print $1}'
00:00:07.995 + sudo kill -9
00:00:07.995 + true
00:00:08.009 [Pipeline] cleanWs
00:00:08.019 [WS-CLEANUP] Deleting project workspace...
00:00:08.019 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.025 [WS-CLEANUP] done
00:00:08.030 [Pipeline] setCustomBuildProperty
00:00:08.044 [Pipeline] sh
00:00:08.326 + sudo git config --global --replace-all safe.directory '*'
00:00:08.409 [Pipeline] httpRequest
00:00:08.440 [Pipeline] echo
00:00:08.442 Sorcerer 10.211.164.101 is alive
00:00:08.449 [Pipeline] httpRequest
00:00:08.452 HttpMethod: GET
00:00:08.453 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.453 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.470 Response Code: HTTP/1.1 200 OK
00:00:08.471 Success: Status code 200 is in the accepted range: 200,404
00:00:08.471 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:13.761 [Pipeline] sh
00:00:14.050 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:14.067 [Pipeline] httpRequest
00:00:14.104 [Pipeline] echo
00:00:14.106 Sorcerer 10.211.164.101 is alive
00:00:14.114 [Pipeline] httpRequest
00:00:14.119 HttpMethod: GET
00:00:14.120 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:14.120 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:14.140 Response Code: HTTP/1.1 200 OK
00:00:14.140 Success: Status code 200 is in the accepted range: 200,404
00:00:14.141 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:13.074 [Pipeline] sh
00:01:13.363 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:16.679 [Pipeline] sh
00:01:16.972 + git -C spdk log --oneline -n5
00:01:16.972 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:16.972 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:01:16.972 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:01:16.972 e03c164a1 nvme: add nvme_ctrlr_lock
00:01:16.972 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:01:16.985 [Pipeline] }
00:01:17.001 [Pipeline] // stage
00:01:17.009 [Pipeline] stage
00:01:17.011 [Pipeline] { (Prepare)
00:01:17.029 [Pipeline] writeFile
00:01:17.045 [Pipeline] sh
00:01:17.330 + logger -p user.info -t JENKINS-CI
00:01:17.343 [Pipeline] sh
00:01:17.622 + logger -p user.info -t JENKINS-CI
00:01:17.634 [Pipeline] sh
00:01:17.927 + cat autorun-spdk.conf
00:01:17.928 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.928 SPDK_TEST_NVMF=1
00:01:17.928 SPDK_TEST_NVME_CLI=1
00:01:17.928 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:17.928 SPDK_TEST_NVMF_NICS=e810
00:01:17.928 SPDK_RUN_UBSAN=1
00:01:17.928 NET_TYPE=phy
00:01:17.936 RUN_NIGHTLY=1
00:01:17.941 [Pipeline] readFile
00:01:17.970 [Pipeline] withEnv
00:01:17.972 [Pipeline] {
00:01:17.986 [Pipeline] sh
00:01:18.274 + set -ex
00:01:18.274 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:18.274 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:18.274 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.274 ++ SPDK_TEST_NVMF=1
00:01:18.274 ++ SPDK_TEST_NVME_CLI=1
00:01:18.274 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:18.274 ++ SPDK_TEST_NVMF_NICS=e810
00:01:18.274 ++ SPDK_RUN_UBSAN=1
00:01:18.274 ++ NET_TYPE=phy
00:01:18.274 ++ RUN_NIGHTLY=1
00:01:18.274 + case $SPDK_TEST_NVMF_NICS in
00:01:18.274 + DRIVERS=ice
00:01:18.274 + [[ tcp == \r\d\m\a ]]
00:01:18.274 + [[ -n ice ]]
00:01:18.274 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:18.274 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:18.274 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:18.274 rmmod: ERROR: Module irdma is not currently loaded
00:01:18.274 rmmod: ERROR: Module i40iw is not currently loaded
00:01:18.274 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:18.274 + true
00:01:18.274 + for D in $DRIVERS
00:01:18.274 + sudo modprobe ice
00:01:18.274 + exit 0
00:01:18.285 [Pipeline] }
00:01:18.305 [Pipeline] // withEnv
00:01:18.311 [Pipeline] }
00:01:18.329 [Pipeline] // stage
00:01:18.340 [Pipeline] catchError
00:01:18.342 [Pipeline] {
00:01:18.357 [Pipeline] timeout
00:01:18.357 Timeout set to expire in 50 min
00:01:18.358 [Pipeline] {
00:01:18.372 [Pipeline] stage
00:01:18.374 [Pipeline] { (Tests)
00:01:18.390 [Pipeline] sh
00:01:18.678 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.678 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.678 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.678 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:18.678 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:18.678 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:18.678 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:18.678 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:18.678 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:18.678 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:18.678 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:18.678 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:18.678 + source /etc/os-release
00:01:18.678 ++ NAME='Fedora Linux'
00:01:18.678 ++ VERSION='38 (Cloud Edition)'
00:01:18.678 ++ ID=fedora
00:01:18.678 ++ VERSION_ID=38
00:01:18.678 ++ VERSION_CODENAME=
00:01:18.678 ++ PLATFORM_ID=platform:f38
00:01:18.678 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:18.678 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:18.678 ++ LOGO=fedora-logo-icon
00:01:18.678 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:18.678 ++ HOME_URL=https://fedoraproject.org/
00:01:18.678 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:18.678 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:18.678 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:18.678 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:18.678 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:18.678 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:18.678 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:18.678 ++ SUPPORT_END=2024-05-14
00:01:18.678 ++ VARIANT='Cloud Edition'
00:01:18.678 ++ VARIANT_ID=cloud
00:01:18.678 + uname -a
00:01:18.678 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:18.678 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:19.616 Hugepages
00:01:19.616 node hugesize free / total
00:01:19.616 node0 1048576kB 0 / 0
00:01:19.616 node0 2048kB 0 / 0
00:01:19.616 node1 1048576kB 0 / 0
00:01:19.616 node1 2048kB 0 / 0
00:01:19.616
00:01:19.616 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:19.616 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:19.616 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:19.616 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:19.616 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:19.616 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:19.616 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:19.616 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:19.616 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:19.616 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:19.616 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:19.616 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:19.616 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:19.616 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:19.616 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:19.616 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:19.616 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:19.616 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:19.616 + rm -f /tmp/spdk-ld-path
00:01:19.616 + source autorun-spdk.conf
00:01:19.616 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.616 ++ SPDK_TEST_NVMF=1
00:01:19.616 ++ SPDK_TEST_NVME_CLI=1
00:01:19.616 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.616 ++ SPDK_TEST_NVMF_NICS=e810
00:01:19.616 ++ SPDK_RUN_UBSAN=1
00:01:19.616 ++ NET_TYPE=phy
00:01:19.616 ++ RUN_NIGHTLY=1
00:01:19.616 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:19.616 + [[ -n '' ]]
00:01:19.616 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:19.876 + for M in /var/spdk/build-*-manifest.txt
00:01:19.876 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:19.876 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:19.876 + for M in /var/spdk/build-*-manifest.txt
00:01:19.876 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:19.876 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:19.876 ++ uname
00:01:19.876 + [[ Linux == \L\i\n\u\x ]]
00:01:19.876 + sudo dmesg -T
00:01:19.876 + sudo dmesg --clear
00:01:19.876 + dmesg_pid=3896927
00:01:19.876 + [[ Fedora Linux == FreeBSD ]]
00:01:19.876 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:19.876 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:19.876 + sudo dmesg -Tw
00:01:19.876 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:19.876 + [[ -x /usr/src/fio-static/fio ]]
00:01:19.876 + export FIO_BIN=/usr/src/fio-static/fio
00:01:19.876 + FIO_BIN=/usr/src/fio-static/fio
00:01:19.876 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:19.876 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:19.876 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:19.876 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:19.876 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:19.876 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:19.876 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:19.876 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:19.876 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:19.876 Test configuration:
00:01:19.876 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.876 SPDK_TEST_NVMF=1
00:01:19.876 SPDK_TEST_NVME_CLI=1
00:01:19.876 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.876 SPDK_TEST_NVMF_NICS=e810
00:01:19.876 SPDK_RUN_UBSAN=1
00:01:19.876 NET_TYPE=phy
00:01:19.876 RUN_NIGHTLY=1
07:18:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:19.876 07:18:35 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:19.876 07:18:35 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:19.876 07:18:35 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:19.876 07:18:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.876 07:18:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.876 07:18:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.876 07:18:35 -- paths/export.sh@5 -- $ export PATH
00:01:19.876 07:18:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:19.876 07:18:35 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:19.876 07:18:35 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:19.876 07:18:35 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720934315.XXXXXX
00:01:19.876 07:18:35 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720934315.fSXcvZ
00:01:19.876 07:18:35 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:19.876 07:18:35 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:19.876 07:18:35 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:19.876 07:18:35 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:19.876 07:18:35 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:19.876 07:18:35 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:19.876 07:18:35 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:19.876 07:18:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:19.876 07:18:35 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:01:19.876 07:18:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:19.876 07:18:35 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:19.876 07:18:35 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:19.876 07:18:35 -- spdk/autobuild.sh@16 -- $ date -u
00:01:19.876 Sun Jul 14 05:18:35 AM UTC 2024
00:01:19.876 07:18:35 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:19.876 LTS-59-g4b94202c6
00:01:19.876 07:18:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:19.876 07:18:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:19.876 07:18:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:19.876 07:18:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:19.876 07:18:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:19.876 07:18:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:19.876 ************************************
00:01:19.876 START TEST ubsan
00:01:19.876 ************************************
00:01:19.876 07:18:35 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:19.876 using ubsan
00:01:19.876
00:01:19.876 real 0m0.000s
00:01:19.876 user 0m0.000s
00:01:19.876 sys 0m0.000s
00:01:19.876 07:18:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:19.876 07:18:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:19.876 ************************************
00:01:19.876 END TEST ubsan
00:01:19.876 ************************************
00:01:19.876 07:18:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:19.876 07:18:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:19.876 07:18:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:19.876 07:18:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:19.876 07:18:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:19.876 07:18:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:19.876 07:18:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:19.876 07:18:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:19.876 07:18:35 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:19.876 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:19.876 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:20.136 Using 'verbs' RDMA provider
00:01:30.690 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:40.671 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:40.671 Creating mk/config.mk...done.
00:01:40.671 Creating mk/cc.flags.mk...done.
00:01:40.671 Type 'make' to build.
00:01:40.671 07:18:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:40.671 07:18:56 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:40.671 07:18:56 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:40.671 07:18:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:40.671 ************************************
00:01:40.671 START TEST make
00:01:40.671 ************************************
00:01:40.671 07:18:56 -- common/autotest_common.sh@1104 -- $ make -j48
00:01:40.671 make[1]: Nothing to be done for 'all'.
00:01:48.832 The Meson build system
00:01:48.832 Version: 1.3.1
00:01:48.832 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:48.832 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:48.832 Build type: native build
00:01:48.832 Program cat found: YES (/usr/bin/cat)
00:01:48.832 Project name: DPDK
00:01:48.832 Project version: 23.11.0
00:01:48.832 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:48.832 C linker for the host machine: cc ld.bfd 2.39-16
00:01:48.832 Host machine cpu family: x86_64
00:01:48.832 Host machine cpu: x86_64
00:01:48.832 Message: ## Building in Developer Mode ##
00:01:48.832 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:48.832 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:48.832 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:48.832 Program python3 found: YES (/usr/bin/python3)
00:01:48.832 Program cat found: YES (/usr/bin/cat)
00:01:48.832 Compiler for C supports arguments -march=native: YES
00:01:48.832 Checking for size of "void *" : 8
00:01:48.832 Checking for size of "void *" : 8 (cached)
00:01:48.832 Library m found: YES
00:01:48.832 Library numa found: YES
00:01:48.832 Has header "numaif.h" : YES
00:01:48.832 Library fdt found: NO
00:01:48.832 Library execinfo found: NO
00:01:48.832 Has header "execinfo.h" : YES
00:01:48.832 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:48.832 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:48.832 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:48.832 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:48.832 Run-time dependency openssl found: YES 3.0.9
00:01:48.832 Run-time dependency libpcap found: YES 1.10.4
00:01:48.832 Has header "pcap.h" with dependency libpcap: YES
00:01:48.833 Compiler for C supports arguments -Wcast-qual: YES
00:01:48.833 Compiler for C supports arguments -Wdeprecated: YES
00:01:48.833 Compiler for C supports arguments -Wformat: YES
00:01:48.833 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:48.833 Compiler for C supports arguments -Wformat-security: NO
00:01:48.833 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:48.833 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:48.833 Compiler for C supports arguments -Wnested-externs: YES
00:01:48.833 Compiler for C supports arguments -Wold-style-definition: YES
00:01:48.833 Compiler for C supports arguments -Wpointer-arith: YES
00:01:48.833 Compiler for C supports arguments -Wsign-compare: YES
00:01:48.833 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:48.833 Compiler for C supports arguments -Wundef: YES
00:01:48.833 Compiler for C supports arguments -Wwrite-strings: YES
00:01:48.833 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:48.833 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:48.833 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:48.833 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:48.833 Program objdump found: YES (/usr/bin/objdump)
00:01:48.833 Compiler for C supports arguments -mavx512f: YES
00:01:48.833 Checking if "AVX512 checking" compiles: YES
00:01:48.833 Fetching value of define "__SSE4_2__" : 1
00:01:48.833 Fetching value of define "__AES__" : 1
00:01:48.833 Fetching value of define "__AVX__" : 1
00:01:48.833 Fetching value of define "__AVX2__" : (undefined)
00:01:48.833 Fetching value of define "__AVX512BW__" : (undefined)
00:01:48.833 Fetching value of define "__AVX512CD__" : (undefined)
00:01:48.833 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:48.833 Fetching value of define "__AVX512F__" : (undefined)
00:01:48.833 Fetching value of define "__AVX512VL__" : (undefined)
00:01:48.833 Fetching value of define "__PCLMUL__" : 1
00:01:48.833 Fetching value of define "__RDRND__" : 1
00:01:48.833 Fetching value of define "__RDSEED__" : (undefined)
00:01:48.833 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:48.833 Fetching value of define "__znver1__" : (undefined)
00:01:48.833 Fetching value of define "__znver2__" : (undefined)
00:01:48.833 Fetching value of define "__znver3__" : (undefined)
00:01:48.833 Fetching value of define "__znver4__" : (undefined)
00:01:48.833 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:48.833 Message: lib/log: Defining dependency "log"
00:01:48.833 Message: lib/kvargs: Defining dependency "kvargs"
00:01:48.833 Message: lib/telemetry: Defining dependency "telemetry"
00:01:48.833 Checking for function "getentropy" : NO
00:01:48.833 Message: lib/eal: Defining dependency "eal"
00:01:48.833 Message: lib/ring: Defining dependency "ring"
00:01:48.833 Message: lib/rcu: Defining dependency "rcu"
00:01:48.833 Message: lib/mempool: Defining dependency "mempool"
00:01:48.833 Message: lib/mbuf: Defining dependency "mbuf"
00:01:48.833 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:48.833 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:48.833 Compiler for C supports arguments -mpclmul: YES
00:01:48.833 Compiler for C supports arguments -maes: YES
00:01:48.833 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:48.833 Compiler for C supports arguments -mavx512bw: YES
00:01:48.833 Compiler for C supports arguments -mavx512dq: YES
00:01:48.833 Compiler for C supports arguments -mavx512vl: YES
00:01:48.833 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:48.833 Compiler for C supports arguments -mavx2: YES
00:01:48.833 Compiler for C supports arguments -mavx: YES
00:01:48.833 Message: lib/net: Defining dependency "net"
00:01:48.833 Message: lib/meter: Defining dependency "meter"
00:01:48.833 Message: lib/ethdev: Defining dependency "ethdev"
00:01:48.833 Message: lib/pci: Defining dependency "pci"
00:01:48.833 Message: lib/cmdline: Defining dependency "cmdline"
00:01:48.833 Message: lib/hash: Defining dependency "hash"
00:01:48.833 Message: lib/timer: Defining dependency "timer"
00:01:48.833 Message: lib/compressdev: Defining dependency "compressdev"
00:01:48.833 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:48.833 Message: lib/dmadev: Defining dependency "dmadev"
00:01:48.833 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:48.833 Message: lib/power: Defining dependency "power"
00:01:48.833 Message: lib/reorder: Defining dependency "reorder"
00:01:48.833 Message: lib/security: Defining dependency "security"
00:01:48.833 Has header "linux/userfaultfd.h" : YES
00:01:48.833 Has header "linux/vduse.h" : YES
00:01:48.833 Message: lib/vhost: Defining dependency "vhost"
00:01:48.833 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:48.833 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:48.833 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:48.833 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:48.833 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:48.833 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:48.833 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:48.833 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:48.833 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:48.833 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:48.833 Program doxygen found: YES (/usr/bin/doxygen)
00:01:48.833 Configuring doxy-api-html.conf using configuration
00:01:48.833 Configuring doxy-api-man.conf using configuration
00:01:48.833 Program mandb found: YES (/usr/bin/mandb)
00:01:48.833 Program sphinx-build found: NO
00:01:48.833 Configuring rte_build_config.h using configuration
00:01:48.833 Message:
00:01:48.833 =================
00:01:48.833 Applications Enabled
00:01:48.833 =================
00:01:48.833
00:01:48.833 apps:
00:01:48.833
00:01:48.833
00:01:48.833 Message:
00:01:48.833 =================
00:01:48.833 Libraries Enabled
00:01:48.833 =================
00:01:48.833
00:01:48.833 libs:
00:01:48.833 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:48.833 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:48.833 cryptodev, dmadev, power, reorder, security, vhost,
00:01:48.833
00:01:48.833 Message:
00:01:48.833 ===============
00:01:48.833 Drivers Enabled
00:01:48.833 ===============
00:01:48.833
00:01:48.833 common:
00:01:48.833
00:01:48.833 bus:
00:01:48.833 pci, vdev,
00:01:48.833 mempool:
00:01:48.833 ring,
00:01:48.833 dma:
00:01:48.833
00:01:48.833 net:
00:01:48.833
00:01:48.833 crypto:
00:01:48.833
00:01:48.833 compress:
00:01:48.833
00:01:48.833 vdpa:
00:01:48.833
00:01:48.833
00:01:48.833 Message:
00:01:48.833 =================
00:01:48.833 Content Skipped
00:01:48.833 =================
00:01:48.833
00:01:48.833 apps:
00:01:48.833 dumpcap: explicitly disabled via build config
00:01:48.833 graph: explicitly disabled via build config
00:01:48.833 pdump: explicitly disabled via build config
00:01:48.833 proc-info: explicitly disabled via build config
00:01:48.833 test-acl: explicitly disabled via build config
00:01:48.833 test-bbdev: explicitly disabled via build config
00:01:48.833 test-cmdline: explicitly disabled via build config
00:01:48.833 test-compress-perf: explicitly disabled via build config
00:01:48.833 test-crypto-perf: explicitly disabled via build config
00:01:48.833 test-dma-perf: explicitly disabled via build config
00:01:48.833 test-eventdev: explicitly disabled via build config
00:01:48.833 test-fib: explicitly disabled via build config
00:01:48.833 test-flow-perf: explicitly disabled via build config
00:01:48.833 test-gpudev: explicitly disabled via build config
00:01:48.833 test-mldev: explicitly disabled via build config
00:01:48.833 test-pipeline: explicitly disabled via build config
00:01:48.833 test-pmd: explicitly disabled via build config
00:01:48.833 test-regex: explicitly disabled via build config
00:01:48.833 test-sad: explicitly disabled via build config
00:01:48.833 test-security-perf: explicitly disabled via build config
00:01:48.833
00:01:48.833 libs:
00:01:48.833 metrics: explicitly disabled via build config
00:01:48.833 acl: explicitly disabled via build config
00:01:48.833 bbdev: explicitly disabled via build config
00:01:48.833 bitratestats: explicitly disabled via build config
00:01:48.833 bpf: explicitly disabled via build config
00:01:48.833 cfgfile: explicitly disabled via build config
00:01:48.833 distributor: explicitly disabled via build config
00:01:48.833 efd: explicitly disabled via build config
00:01:48.833 eventdev: explicitly disabled via build config
00:01:48.833 dispatcher: explicitly disabled via build config
00:01:48.833 gpudev: explicitly disabled via build config
00:01:48.833 gro: explicitly disabled via build config
00:01:48.833 gso: explicitly disabled via build config
00:01:48.833 ip_frag: explicitly disabled via build config
00:01:48.833 jobstats: explicitly disabled via build config
00:01:48.833 latencystats: explicitly disabled via build config
00:01:48.833 lpm: explicitly disabled via build config
00:01:48.833 member: explicitly disabled via build config
00:01:48.833 pcapng: explicitly disabled via build config
00:01:48.833 rawdev: explicitly disabled via build config
00:01:48.833 regexdev: explicitly disabled via build config
00:01:48.833 mldev: explicitly disabled via build config
00:01:48.833 rib: explicitly disabled via build config
00:01:48.833 sched: explicitly disabled via build config
00:01:48.833 stack: explicitly disabled via build config
00:01:48.833 ipsec: explicitly disabled via build config
00:01:48.833 pdcp: explicitly disabled via build config
00:01:48.833 fib: explicitly disabled via build config
00:01:48.833 port: explicitly disabled via build config
00:01:48.833 pdump: explicitly disabled via build config
00:01:48.833 table: explicitly disabled via build config
00:01:48.833 pipeline: explicitly disabled via build config
00:01:48.833 graph: explicitly disabled via build config
00:01:48.833 node: explicitly disabled via build config
00:01:48.833
00:01:48.833 drivers:
00:01:48.833 common/cpt: not in enabled drivers build config
00:01:48.833 common/dpaax: not in enabled drivers build config
00:01:48.833 common/iavf: not in enabled drivers build config
00:01:48.833 common/idpf: not in enabled drivers build config
00:01:48.833 common/mvep: not in enabled drivers build config
00:01:48.833 common/octeontx: not in enabled drivers build config
00:01:48.833 bus/auxiliary: not in enabled drivers build config
00:01:48.833 bus/cdx: not in enabled drivers build config
00:01:48.833 bus/dpaa: not in enabled drivers build config
00:01:48.833 bus/fslmc: not in enabled drivers build config
00:01:48.833 bus/ifpga: not in enabled drivers build config
00:01:48.833 bus/platform: not in enabled drivers build config
00:01:48.833 bus/vmbus: not in enabled drivers build config
00:01:48.833 common/cnxk: not in enabled drivers build config
00:01:48.833 common/mlx5: not in enabled drivers build config
00:01:48.833 common/nfp: not in enabled drivers build config
00:01:48.833 common/qat: not in enabled drivers build config
00:01:48.833 common/sfc_efx: not in enabled drivers build config
00:01:48.833 mempool/bucket: not in enabled drivers build config
00:01:48.833 mempool/cnxk: not in enabled drivers build config
00:01:48.833 mempool/dpaa: not in enabled drivers build config
00:01:48.833 mempool/dpaa2: not in enabled drivers build config
00:01:48.833 mempool/octeontx: not in enabled drivers build config
00:01:48.833 mempool/stack: not in enabled drivers build config
00:01:48.833 dma/cnxk: not in enabled drivers build config
00:01:48.833 dma/dpaa: not in enabled drivers build config
00:01:48.833 dma/dpaa2: not in enabled drivers build config
00:01:48.833 dma/hisilicon: not in enabled drivers build config
00:01:48.833 dma/idxd: not in enabled drivers build config
00:01:48.833 dma/ioat: not in enabled drivers build config
00:01:48.833 dma/skeleton: not in enabled drivers build config
00:01:48.833 net/af_packet: not in enabled drivers build config
00:01:48.833 net/af_xdp: not in enabled drivers build config
00:01:48.833 net/ark: not in enabled drivers build config
00:01:48.833 net/atlantic: not in enabled drivers build config
00:01:48.833 net/avp: not in enabled drivers build config
00:01:48.833 net/axgbe: not in enabled drivers build config
00:01:48.833 net/bnx2x: not in enabled drivers build config
00:01:48.833 net/bnxt: not in enabled drivers build config
00:01:48.833 net/bonding: not in enabled drivers build config
00:01:48.833 net/cnxk: not in enabled drivers build config
00:01:48.833 net/cpfl: not in enabled drivers build config
00:01:48.833 net/cxgbe: not in enabled drivers build config
00:01:48.833 net/dpaa: not in enabled drivers build config
00:01:48.833 net/dpaa2: not in enabled drivers build config
00:01:48.833 net/e1000: not in enabled drivers build config
00:01:48.833 net/ena: not in enabled drivers build config
00:01:48.833 net/enetc: not in enabled drivers build config
00:01:48.833 net/enetfec: not in enabled drivers build config
00:01:48.833 net/enic: not in enabled drivers build config
00:01:48.833 net/failsafe: not in enabled drivers build config
00:01:48.833 net/fm10k: not in enabled drivers build config
00:01:48.833 net/gve: not in enabled drivers build config
00:01:48.833 net/hinic: not in enabled drivers build config
00:01:48.833 net/hns3: not in enabled drivers build config
00:01:48.833 net/i40e: not in enabled drivers build config
00:01:48.833 net/iavf: not in enabled drivers build config
00:01:48.833 net/ice: not in enabled drivers build config
00:01:48.833 net/idpf: not in enabled drivers build config
00:01:48.833 net/igc: not in enabled drivers build config
00:01:48.833 net/ionic: not in enabled drivers build config
00:01:48.833 net/ipn3ke: not in enabled drivers build config
00:01:48.833 net/ixgbe: not in enabled drivers build config
00:01:48.833 net/mana: not in enabled drivers build config
00:01:48.833 net/memif: not in enabled drivers build config
00:01:48.833 net/mlx4: not in enabled drivers build config
00:01:48.833 net/mlx5: not in enabled drivers build config
00:01:48.833 net/mvneta: not in enabled drivers build config
00:01:48.833 net/mvpp2: not in enabled drivers build config
00:01:48.833 net/netvsc: not in enabled drivers build config
00:01:48.833 net/nfb: not in enabled drivers build config
00:01:48.833 net/nfp: not in enabled drivers build config
00:01:48.833 net/ngbe: not in enabled drivers build config
00:01:48.833 net/null: not in enabled drivers build config
00:01:48.833 net/octeontx: not in enabled drivers build config
00:01:48.833 net/octeon_ep: not in enabled drivers build config
00:01:48.833 net/pcap: not in enabled drivers build config
00:01:48.833 net/pfe: not in enabled drivers build config
00:01:48.833 net/qede: not in enabled drivers build config
00:01:48.833 net/ring: not in enabled drivers build config
00:01:48.833 net/sfc: not in enabled drivers build config
00:01:48.833 net/softnic: not in enabled drivers build config
00:01:48.833 net/tap: not in enabled drivers build config
00:01:48.833 net/thunderx: not in enabled drivers build config
00:01:48.833 net/txgbe: not in enabled drivers build config
00:01:48.833 net/vdev_netvsc: not in enabled drivers build config
00:01:48.833 net/vhost: not in enabled drivers build config
00:01:48.833 net/virtio: not in enabled drivers build config
00:01:48.833 net/vmxnet3: not in enabled drivers build config
00:01:48.833 raw/*: missing internal dependency, "rawdev"
00:01:48.833 crypto/armv8: not in enabled drivers build config
00:01:48.833 crypto/bcmfs: not in enabled drivers build config
00:01:48.833 crypto/caam_jr: not in enabled drivers build config
00:01:48.833 crypto/ccp: not in enabled drivers build config
00:01:48.833 crypto/cnxk: not in enabled drivers build config
00:01:48.833 crypto/dpaa_sec: not in enabled drivers build config
00:01:48.833 crypto/dpaa2_sec: not in enabled drivers build config
00:01:48.833 crypto/ipsec_mb: not in enabled drivers build config
00:01:48.833 crypto/mlx5: not in enabled drivers build config
00:01:48.833 crypto/mvsam: not in enabled drivers build config
00:01:48.833 crypto/nitrox: not in enabled drivers build config
00:01:48.833 crypto/null: not in enabled drivers build config
00:01:48.833 crypto/octeontx: not in enabled drivers build config
00:01:48.833 crypto/openssl: not in enabled drivers build config
00:01:48.833 crypto/scheduler: not in enabled drivers build config
00:01:48.833 crypto/uadk: not in enabled drivers build config
00:01:48.833 crypto/virtio: not in enabled drivers build config
00:01:48.833 compress/isal: not in enabled drivers build config
00:01:48.833 compress/mlx5: not in enabled drivers build config
00:01:48.834 compress/octeontx: not in enabled drivers build config
00:01:48.834 compress/zlib: not in enabled drivers build config
00:01:48.834 regex/*: missing internal dependency, "regexdev"
00:01:48.834 ml/*: missing internal dependency, "mldev"
00:01:48.834 vdpa/ifc: not in enabled drivers build config
00:01:48.834 vdpa/mlx5: not in enabled drivers build config
00:01:48.834 vdpa/nfp: not in enabled drivers build config
00:01:48.834 vdpa/sfc: not in enabled drivers build config
00:01:48.834 event/*: missing internal dependency, "eventdev"
00:01:48.834 baseband/*: missing internal dependency, "bbdev"
00:01:48.834 gpu/*: missing internal dependency, "gpudev"
00:01:48.834
00:01:48.834
00:01:49.091 Build targets in project: 85
00:01:49.091
00:01:49.091 DPDK 23.11.0
00:01:49.091
00:01:49.091 User defined options
00:01:49.091 buildtype : debug
00:01:49.091 default_library : shared
00:01:49.091 libdir : lib
00:01:49.091 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:49.091 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:01:49.091 c_link_args :
cpu_instruction_set: native
00:01:49.092 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:49.092 disable_libs : bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:49.092 enable_docs : false
00:01:49.092 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:49.092 enable_kmods : false
00:01:49.092 tests : false
00:01:49.092
00:01:49.092 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:49.358 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:49.654 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[8/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[9/265] Linking static target lib/librte_kvargs.a
[10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[15/265] Compiling C object lib/librte_log.a.p/log_log.c.o
[16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[17/265] Linking static target lib/librte_log.a
[18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:50.235 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.500 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[24/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[27/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[29/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[30/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:50.500 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[34/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
[35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[36/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[37/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[39/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[45/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
[48/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[49/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[50/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[51/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[52/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
[53/265] Linking static target lib/librte_telemetry.a
[54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
[55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
[58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
[61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
[63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
[64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
[65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[66/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
[67/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[68/265] Linking static target lib/librte_pci.a
[69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
[71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
[72/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[73/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
[74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:50.762 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:51.027 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[78/265] Linking target lib/librte_log.so.24.0
[79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
[80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:51.290 [87/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
[88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
[89/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[90/265] Linking target lib/librte_kvargs.so.24.0
[91/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
[92/265] Linking static target lib/net/libnet_crc_avx512_lib.a
[93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[94/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[95/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[96/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
[97/265] Linking static target lib/librte_ring.a
00:01:51.551 [98/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[99/265] Linking static target lib/librte_eal.a
[100/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
[101/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
[104/265] Linking static target lib/librte_meter.a
[105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
[106/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
[108/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
[109/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
[110/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
[112/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[113/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[114/265] Linking target lib/librte_telemetry.so.24.0
[115/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
[116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
[117/265] Linking static target lib/librte_mempool.a
[118/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
[119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:51.811 [120/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[121/265] Linking static target lib/librte_rcu.a
[122/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
[123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
[124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
[125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
[126/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
[129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
[130/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
[132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
[134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
[135/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
[136/265] Linking static target lib/librte_cmdline.a
[137/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:52.070 [138/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
[140/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
[141/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
[142/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
[143/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
[144/265] Linking static target lib/librte_net.a
[145/265] Linking static target lib/librte_timer.a
[146/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
[147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:52.331 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
[149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
[150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
[151/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
[152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
[153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
[154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
[155/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
[156/265] Linking static target lib/librte_dmadev.a
[157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:52.590 [158/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
[159/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
[160/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
[161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
[162/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
[163/265] Linking static target lib/librte_hash.a
[164/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
[165/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
[166/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
[167/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
[168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
[169/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
[170/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
[171/265] Linking static target lib/librte_compressdev.a
00:01:52.850 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
[173/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
[174/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
[175/265] Linking static target lib/librte_power.a
[176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
[177/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
[178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
[179/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
[180/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
[181/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
[182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
[183/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
[184/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
[185/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
[186/265] Linking static target drivers/libtmp_rte_bus_vdev.a
[187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:53.109 [188/265] Linking static target drivers/libtmp_rte_bus_pci.a
[189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
[190/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
[191/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
[192/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
[193/265] Linking static target lib/librte_reorder.a
[194/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
[195/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
[196/265] Linking static target lib/librte_mbuf.a
[197/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
[198/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
[199/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
[200/265] Linking static target drivers/libtmp_rte_mempool_ring.a
[201/265] Linking static target drivers/librte_bus_vdev.a
[202/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
[203/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
[204/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
[205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
[206/265] Linking static target drivers/librte_bus_pci.a
00:01:53.368 [207/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
[208/265] Linking static target lib/librte_security.a
[209/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
[210/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
[211/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
[212/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
[213/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
[214/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
[215/265] Linking static target drivers/librte_mempool_ring.a
[216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
[217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
[218/265] Linking static target lib/librte_ethdev.a
00:01:53.627 [219/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
[220/265] Linking static target lib/librte_cryptodev.a
[221/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
[222/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
[223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.560 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.933 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:57.827 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.827 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.827 [228/265] Linking target lib/librte_eal.so.24.0
00:01:57.827 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:01:57.827 [230/265] Linking target lib/librte_ring.so.24.0
00:01:57.827 [231/265] Linking target lib/librte_meter.so.24.0
00:01:57.827 [232/265] Linking target lib/librte_timer.so.24.0
00:01:57.827 [233/265] Linking target lib/librte_pci.so.24.0
00:01:57.827 [234/265] Linking target lib/librte_dmadev.so.24.0
00:01:57.827 [235/265] Linking target drivers/librte_bus_vdev.so.24.0
00:01:58.085 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:01:58.085 [237/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:01:58.085 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:01:58.085 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:01:58.085 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:01:58.085 [241/265] Linking target lib/librte_rcu.so.24.0
00:01:58.085 [242/265] Linking target lib/librte_mempool.so.24.0
00:01:58.085 [243/265] Linking target drivers/librte_bus_pci.so.24.0
00:01:58.085 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:01:58.085 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:01:58.085 [246/265] Linking target lib/librte_mbuf.so.24.0
00:01:58.085 [247/265] Linking target drivers/librte_mempool_ring.so.24.0
00:01:58.342 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:01:58.342 [249/265] Linking target lib/librte_compressdev.so.24.0
00:01:58.342 [250/265] Linking target lib/librte_reorder.so.24.0
00:01:58.342 [251/265] Linking target lib/librte_net.so.24.0
00:01:58.342 [252/265] Linking target lib/librte_cryptodev.so.24.0
00:01:58.601 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:01:58.601 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:01:58.601 [255/265] Linking target lib/librte_hash.so.24.0
00:01:58.601 [256/265] Linking target lib/librte_cmdline.so.24.0
00:01:58.601 [257/265] Linking target lib/librte_security.so.24.0
00:01:58.601 [258/265] Linking target lib/librte_ethdev.so.24.0
00:01:58.601 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:01:58.601 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:01:58.601 [261/265] Linking target lib/librte_power.so.24.0
00:02:01.134 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
[263/265] Linking static target lib/librte_vhost.a
00:02:02.070 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.070 [265/265] Linking target lib/librte_vhost.so.24.0
00:02:02.070 INFO: autodetecting backend as ninja
00:02:02.070 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48
00:02:03.007 CC lib/ut_mock/mock.o
00:02:03.007 CC lib/log/log.o
00:02:03.007 CC lib/log/log_flags.o
00:02:03.007 CC lib/log/log_deprecated.o
00:02:03.007 CC lib/ut/ut.o
00:02:03.007 LIB libspdk_ut_mock.a
00:02:03.007 SO libspdk_ut_mock.so.5.0
00:02:03.007 LIB libspdk_log.a
00:02:03.007 LIB libspdk_ut.a
00:02:03.265 SO libspdk_log.so.6.1
00:02:03.265 SO libspdk_ut.so.1.0
00:02:03.265 SYMLINK libspdk_ut_mock.so
00:02:03.265
SYMLINK libspdk_ut.so 00:02:03.265 SYMLINK libspdk_log.so 00:02:03.265 CC lib/ioat/ioat.o 00:02:03.265 CC lib/dma/dma.o 00:02:03.265 CXX lib/trace_parser/trace.o 00:02:03.265 CC lib/util/base64.o 00:02:03.265 CC lib/util/bit_array.o 00:02:03.265 CC lib/util/cpuset.o 00:02:03.265 CC lib/util/crc16.o 00:02:03.265 CC lib/util/crc32.o 00:02:03.265 CC lib/util/crc32c.o 00:02:03.265 CC lib/util/crc32_ieee.o 00:02:03.265 CC lib/util/crc64.o 00:02:03.265 CC lib/util/dif.o 00:02:03.265 CC lib/util/fd.o 00:02:03.265 CC lib/util/file.o 00:02:03.265 CC lib/util/hexlify.o 00:02:03.265 CC lib/util/iov.o 00:02:03.265 CC lib/util/math.o 00:02:03.265 CC lib/util/pipe.o 00:02:03.265 CC lib/util/strerror_tls.o 00:02:03.265 CC lib/util/string.o 00:02:03.265 CC lib/util/uuid.o 00:02:03.265 CC lib/util/fd_group.o 00:02:03.265 CC lib/util/xor.o 00:02:03.265 CC lib/util/zipf.o 00:02:03.523 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.523 CC lib/vfio_user/host/vfio_user.o 00:02:03.523 LIB libspdk_dma.a 00:02:03.523 SO libspdk_dma.so.3.0 00:02:03.523 SYMLINK libspdk_dma.so 00:02:03.523 LIB libspdk_ioat.a 00:02:03.523 SO libspdk_ioat.so.6.0 00:02:03.781 SYMLINK libspdk_ioat.so 00:02:03.781 LIB libspdk_vfio_user.a 00:02:03.781 SO libspdk_vfio_user.so.4.0 00:02:03.781 SYMLINK libspdk_vfio_user.so 00:02:03.781 LIB libspdk_util.a 00:02:04.039 SO libspdk_util.so.8.0 00:02:04.039 SYMLINK libspdk_util.so 00:02:04.307 CC lib/conf/conf.o 00:02:04.307 CC lib/rdma/common.o 00:02:04.307 CC lib/vmd/vmd.o 00:02:04.307 CC lib/idxd/idxd.o 00:02:04.307 CC lib/rdma/rdma_verbs.o 00:02:04.307 CC lib/env_dpdk/env.o 00:02:04.307 CC lib/idxd/idxd_user.o 00:02:04.307 CC lib/vmd/led.o 00:02:04.307 CC lib/json/json_parse.o 00:02:04.307 CC lib/env_dpdk/memory.o 00:02:04.307 CC lib/idxd/idxd_kernel.o 00:02:04.307 CC lib/json/json_util.o 00:02:04.307 CC lib/env_dpdk/pci.o 00:02:04.307 CC lib/json/json_write.o 00:02:04.307 CC lib/env_dpdk/init.o 00:02:04.307 CC lib/env_dpdk/threads.o 00:02:04.307 CC lib/env_dpdk/pci_ioat.o 00:02:04.307 CC lib/env_dpdk/pci_virtio.o 00:02:04.307 CC lib/env_dpdk/pci_vmd.o 00:02:04.307 CC lib/env_dpdk/pci_idxd.o 00:02:04.307 CC lib/env_dpdk/pci_event.o 00:02:04.307 CC lib/env_dpdk/sigbus_handler.o 00:02:04.307 CC lib/env_dpdk/pci_dpdk.o 00:02:04.307 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.307 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.307 LIB libspdk_trace_parser.a 00:02:04.307 SO libspdk_trace_parser.so.4.0 00:02:04.307 SYMLINK libspdk_trace_parser.so 00:02:04.617 LIB libspdk_conf.a 00:02:04.617 SO libspdk_conf.so.5.0 00:02:04.617 LIB libspdk_rdma.a 00:02:04.617 SYMLINK libspdk_conf.so 00:02:04.617 SO libspdk_rdma.so.5.0 00:02:04.617 LIB libspdk_json.a 00:02:04.617 SO libspdk_json.so.5.1 00:02:04.617 SYMLINK libspdk_rdma.so 00:02:04.617 SYMLINK libspdk_json.so 00:02:04.876 CC lib/jsonrpc/jsonrpc_server.o 00:02:04.876 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:04.876 CC lib/jsonrpc/jsonrpc_client.o 00:02:04.876 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:04.876 LIB libspdk_idxd.a 00:02:04.876 SO libspdk_idxd.so.11.0 00:02:04.876 SYMLINK libspdk_idxd.so 00:02:04.876 LIB libspdk_vmd.a 00:02:04.876 SO libspdk_vmd.so.5.0 00:02:04.876 SYMLINK libspdk_vmd.so 00:02:04.876 LIB libspdk_jsonrpc.a 00:02:05.135 SO libspdk_jsonrpc.so.5.1 00:02:05.135 SYMLINK libspdk_jsonrpc.so 00:02:05.135 CC lib/rpc/rpc.o 00:02:05.393 LIB libspdk_rpc.a 00:02:05.393 SO libspdk_rpc.so.5.0 00:02:05.393 SYMLINK libspdk_rpc.so 00:02:05.662 CC lib/notify/notify.o 00:02:05.662 CC lib/notify/notify_rpc.o 00:02:05.662 CC lib/sock/sock.o 00:02:05.662 CC 
lib/trace/trace.o 00:02:05.662 CC lib/trace/trace_flags.o 00:02:05.662 CC lib/sock/sock_rpc.o 00:02:05.662 CC lib/trace/trace_rpc.o 00:02:05.662 LIB libspdk_notify.a 00:02:05.662 SO libspdk_notify.so.5.0 00:02:05.920 LIB libspdk_trace.a 00:02:05.920 SYMLINK libspdk_notify.so 00:02:05.920 SO libspdk_trace.so.9.0 00:02:05.920 SYMLINK libspdk_trace.so 00:02:05.920 LIB libspdk_sock.a 00:02:05.920 SO libspdk_sock.so.8.0 00:02:05.920 CC lib/thread/thread.o 00:02:05.920 CC lib/thread/iobuf.o 00:02:06.178 SYMLINK libspdk_sock.so 00:02:06.178 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:06.178 CC lib/nvme/nvme_ctrlr.o 00:02:06.178 CC lib/nvme/nvme_fabric.o 00:02:06.178 LIB libspdk_env_dpdk.a 00:02:06.178 CC lib/nvme/nvme_ns_cmd.o 00:02:06.178 CC lib/nvme/nvme_ns.o 00:02:06.178 CC lib/nvme/nvme_pcie_common.o 00:02:06.178 CC lib/nvme/nvme_pcie.o 00:02:06.178 CC lib/nvme/nvme_qpair.o 00:02:06.178 CC lib/nvme/nvme.o 00:02:06.178 CC lib/nvme/nvme_quirks.o 00:02:06.178 CC lib/nvme/nvme_transport.o 00:02:06.178 CC lib/nvme/nvme_discovery.o 00:02:06.178 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:06.178 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:06.178 CC lib/nvme/nvme_tcp.o 00:02:06.178 CC lib/nvme/nvme_opal.o 00:02:06.178 CC lib/nvme/nvme_io_msg.o 00:02:06.178 CC lib/nvme/nvme_poll_group.o 00:02:06.178 CC lib/nvme/nvme_zns.o 00:02:06.178 CC lib/nvme/nvme_cuse.o 00:02:06.178 CC lib/nvme/nvme_vfio_user.o 00:02:06.178 CC lib/nvme/nvme_rdma.o 00:02:06.178 SO libspdk_env_dpdk.so.13.0 00:02:06.436 SYMLINK libspdk_env_dpdk.so 00:02:07.808 LIB libspdk_thread.a 00:02:07.808 SO libspdk_thread.so.9.0 00:02:07.808 SYMLINK libspdk_thread.so 00:02:07.808 CC lib/blob/blobstore.o 00:02:07.808 CC lib/accel/accel.o 00:02:07.808 CC lib/virtio/virtio.o 00:02:07.808 CC lib/init/json_config.o 00:02:07.808 CC lib/virtio/virtio_vhost_user.o 00:02:07.808 CC lib/accel/accel_rpc.o 00:02:07.808 CC lib/init/subsystem.o 00:02:07.808 CC lib/blob/request.o 00:02:07.808 CC lib/virtio/virtio_vfio_user.o 00:02:07.808 CC lib/accel/accel_sw.o 00:02:07.808 CC lib/init/subsystem_rpc.o 00:02:07.808 CC lib/blob/zeroes.o 00:02:07.808 CC lib/virtio/virtio_pci.o 00:02:07.808 CC lib/init/rpc.o 00:02:07.808 CC lib/blob/blob_bs_dev.o 00:02:08.065 LIB libspdk_init.a 00:02:08.065 SO libspdk_init.so.4.0 00:02:08.065 LIB libspdk_virtio.a 00:02:08.065 SYMLINK libspdk_init.so 00:02:08.065 SO libspdk_virtio.so.6.0 00:02:08.323 SYMLINK libspdk_virtio.so 00:02:08.323 CC lib/event/app.o 00:02:08.323 CC lib/event/reactor.o 00:02:08.323 CC lib/event/log_rpc.o 00:02:08.323 CC lib/event/app_rpc.o 00:02:08.323 CC lib/event/scheduler_static.o 00:02:08.580 LIB libspdk_nvme.a 00:02:08.580 SO libspdk_nvme.so.12.0 00:02:08.580 LIB libspdk_event.a 00:02:08.580 SO libspdk_event.so.12.0 00:02:08.838 SYMLINK libspdk_event.so 00:02:08.838 LIB libspdk_accel.a 00:02:08.838 SYMLINK libspdk_nvme.so 00:02:08.838 SO libspdk_accel.so.14.0 00:02:08.838 SYMLINK libspdk_accel.so 00:02:09.095 CC lib/bdev/bdev.o 00:02:09.095 CC lib/bdev/bdev_rpc.o 00:02:09.095 CC lib/bdev/bdev_zone.o 00:02:09.095 CC lib/bdev/part.o 00:02:09.095 CC lib/bdev/scsi_nvme.o 00:02:10.467 LIB libspdk_blob.a 00:02:10.467 SO libspdk_blob.so.10.1 00:02:10.725 SYMLINK libspdk_blob.so 00:02:10.725 CC lib/lvol/lvol.o 00:02:10.725 CC lib/blobfs/blobfs.o 00:02:10.725 CC lib/blobfs/tree.o 00:02:11.668 LIB libspdk_bdev.a 00:02:11.668 SO libspdk_bdev.so.14.0 00:02:11.668 LIB libspdk_blobfs.a 00:02:11.668 SO libspdk_blobfs.so.9.0 00:02:11.668 LIB libspdk_lvol.a 00:02:11.668 SYMLINK libspdk_bdev.so 00:02:11.668 SO libspdk_lvol.so.9.1 
00:02:11.668 SYMLINK libspdk_blobfs.so 00:02:11.668 SYMLINK libspdk_lvol.so 00:02:11.668 CC lib/nbd/nbd.o 00:02:11.668 CC lib/nvmf/ctrlr.o 00:02:11.668 CC lib/nvmf/ctrlr_discovery.o 00:02:11.668 CC lib/nbd/nbd_rpc.o 00:02:11.668 CC lib/ublk/ublk.o 00:02:11.668 CC lib/nvmf/ctrlr_bdev.o 00:02:11.668 CC lib/ublk/ublk_rpc.o 00:02:11.668 CC lib/nvmf/subsystem.o 00:02:11.668 CC lib/scsi/dev.o 00:02:11.668 CC lib/nvmf/nvmf.o 00:02:11.668 CC lib/scsi/lun.o 00:02:11.668 CC lib/nvmf/nvmf_rpc.o 00:02:11.668 CC lib/ftl/ftl_core.o 00:02:11.668 CC lib/scsi/port.o 00:02:11.668 CC lib/nvmf/transport.o 00:02:11.668 CC lib/ftl/ftl_init.o 00:02:11.668 CC lib/scsi/scsi.o 00:02:11.668 CC lib/nvmf/tcp.o 00:02:11.668 CC lib/ftl/ftl_layout.o 00:02:11.668 CC lib/nvmf/rdma.o 00:02:11.668 CC lib/ftl/ftl_debug.o 00:02:11.668 CC lib/ftl/ftl_io.o 00:02:11.668 CC lib/scsi/scsi_bdev.o 00:02:11.668 CC lib/ftl/ftl_sb.o 00:02:11.668 CC lib/scsi/scsi_rpc.o 00:02:11.668 CC lib/scsi/scsi_pr.o 00:02:11.668 CC lib/ftl/ftl_l2p_flat.o 00:02:11.668 CC lib/ftl/ftl_l2p.o 00:02:11.668 CC lib/scsi/task.o 00:02:11.668 CC lib/ftl/ftl_nv_cache.o 00:02:11.668 CC lib/ftl/ftl_band.o 00:02:11.668 CC lib/ftl/ftl_band_ops.o 00:02:11.668 CC lib/ftl/ftl_writer.o 00:02:11.668 CC lib/ftl/ftl_rq.o 00:02:11.668 CC lib/ftl/ftl_reloc.o 00:02:11.668 CC lib/ftl/ftl_l2p_cache.o 00:02:11.668 CC lib/ftl/ftl_p2l.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:11.668 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:11.927 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:11.927 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:11.927 CC lib/ftl/utils/ftl_conf.o 00:02:11.927 CC lib/ftl/utils/ftl_mempool.o 00:02:11.927 CC lib/ftl/utils/ftl_md.o 00:02:12.189 CC lib/ftl/utils/ftl_bitmap.o 00:02:12.189 CC lib/ftl/utils/ftl_property.o 00:02:12.189 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:12.189 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:12.189 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:12.189 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:12.189 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:12.189 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:12.189 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:12.189 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:12.189 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:12.189 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:12.189 CC lib/ftl/base/ftl_base_dev.o 00:02:12.189 CC lib/ftl/base/ftl_base_bdev.o 00:02:12.189 CC lib/ftl/ftl_trace.o 00:02:12.448 LIB libspdk_nbd.a 00:02:12.448 SO libspdk_nbd.so.6.0 00:02:12.448 SYMLINK libspdk_nbd.so 00:02:12.448 LIB libspdk_scsi.a 00:02:12.448 SO libspdk_scsi.so.8.0 00:02:12.707 LIB libspdk_ublk.a 00:02:12.707 SYMLINK libspdk_scsi.so 00:02:12.707 SO libspdk_ublk.so.2.0 00:02:12.707 SYMLINK libspdk_ublk.so 00:02:12.707 CC lib/vhost/vhost.o 00:02:12.707 CC lib/iscsi/conn.o 00:02:12.707 CC lib/iscsi/init_grp.o 00:02:12.707 CC lib/vhost/vhost_rpc.o 00:02:12.707 CC lib/iscsi/iscsi.o 00:02:12.707 CC lib/vhost/vhost_scsi.o 00:02:12.707 CC lib/vhost/vhost_blk.o 00:02:12.707 CC lib/iscsi/md5.o 00:02:12.707 CC lib/vhost/rte_vhost_user.o 00:02:12.707 CC lib/iscsi/param.o 00:02:12.707 CC lib/iscsi/portal_grp.o 00:02:12.707 CC lib/iscsi/tgt_node.o 00:02:12.707 CC 
lib/iscsi/iscsi_subsystem.o 00:02:12.707 CC lib/iscsi/iscsi_rpc.o 00:02:12.707 CC lib/iscsi/task.o 00:02:12.965 LIB libspdk_ftl.a 00:02:13.224 SO libspdk_ftl.so.8.0 00:02:13.502 SYMLINK libspdk_ftl.so 00:02:14.070 LIB libspdk_vhost.a 00:02:14.070 SO libspdk_vhost.so.7.1 00:02:14.070 SYMLINK libspdk_vhost.so 00:02:14.070 LIB libspdk_iscsi.a 00:02:14.070 LIB libspdk_nvmf.a 00:02:14.329 SO libspdk_iscsi.so.7.0 00:02:14.329 SO libspdk_nvmf.so.17.0 00:02:14.329 SYMLINK libspdk_iscsi.so 00:02:14.329 SYMLINK libspdk_nvmf.so 00:02:14.588 CC module/env_dpdk/env_dpdk_rpc.o 00:02:14.588 CC module/sock/posix/posix.o 00:02:14.588 CC module/blob/bdev/blob_bdev.o 00:02:14.588 CC module/accel/dsa/accel_dsa.o 00:02:14.588 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:14.588 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:14.588 CC module/accel/iaa/accel_iaa.o 00:02:14.588 CC module/accel/error/accel_error.o 00:02:14.588 CC module/accel/dsa/accel_dsa_rpc.o 00:02:14.588 CC module/accel/error/accel_error_rpc.o 00:02:14.588 CC module/scheduler/gscheduler/gscheduler.o 00:02:14.588 CC module/accel/iaa/accel_iaa_rpc.o 00:02:14.588 CC module/accel/ioat/accel_ioat.o 00:02:14.588 CC module/accel/ioat/accel_ioat_rpc.o 00:02:14.588 LIB libspdk_env_dpdk_rpc.a 00:02:14.588 SO libspdk_env_dpdk_rpc.so.5.0 00:02:14.846 SYMLINK libspdk_env_dpdk_rpc.so 00:02:14.846 LIB libspdk_scheduler_gscheduler.a 00:02:14.846 LIB libspdk_scheduler_dpdk_governor.a 00:02:14.846 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:14.846 SO libspdk_scheduler_gscheduler.so.3.0 00:02:14.846 LIB libspdk_accel_error.a 00:02:14.846 LIB libspdk_accel_ioat.a 00:02:14.846 LIB libspdk_scheduler_dynamic.a 00:02:14.846 LIB libspdk_accel_iaa.a 00:02:14.846 SO libspdk_accel_error.so.1.0 00:02:14.846 SO libspdk_accel_ioat.so.5.0 00:02:14.847 SO libspdk_scheduler_dynamic.so.3.0 00:02:14.847 SYMLINK libspdk_scheduler_gscheduler.so 00:02:14.847 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:14.847 SO libspdk_accel_iaa.so.2.0 00:02:14.847 LIB libspdk_accel_dsa.a 00:02:14.847 LIB libspdk_blob_bdev.a 00:02:14.847 SYMLINK libspdk_accel_error.so 00:02:14.847 SYMLINK libspdk_scheduler_dynamic.so 00:02:14.847 SYMLINK libspdk_accel_ioat.so 00:02:14.847 SO libspdk_accel_dsa.so.4.0 00:02:14.847 SO libspdk_blob_bdev.so.10.1 00:02:14.847 SYMLINK libspdk_accel_iaa.so 00:02:14.847 SYMLINK libspdk_accel_dsa.so 00:02:14.847 SYMLINK libspdk_blob_bdev.so 00:02:15.106 CC module/blobfs/bdev/blobfs_bdev.o 00:02:15.106 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:15.106 CC module/bdev/error/vbdev_error.o 00:02:15.106 CC module/bdev/error/vbdev_error_rpc.o 00:02:15.106 CC module/bdev/malloc/bdev_malloc.o 00:02:15.106 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:15.106 CC module/bdev/null/bdev_null.o 00:02:15.106 CC module/bdev/passthru/vbdev_passthru.o 00:02:15.106 CC module/bdev/gpt/gpt.o 00:02:15.106 CC module/bdev/null/bdev_null_rpc.o 00:02:15.106 CC module/bdev/raid/bdev_raid.o 00:02:15.106 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:15.106 CC module/bdev/gpt/vbdev_gpt.o 00:02:15.106 CC module/bdev/lvol/vbdev_lvol.o 00:02:15.106 CC module/bdev/raid/bdev_raid_rpc.o 00:02:15.106 CC module/bdev/nvme/bdev_nvme.o 00:02:15.106 CC module/bdev/delay/vbdev_delay.o 00:02:15.106 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:15.106 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:15.106 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:15.106 CC module/bdev/raid/bdev_raid_sb.o 00:02:15.106 CC module/bdev/aio/bdev_aio.o 00:02:15.106 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:15.106 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:15.106 CC module/bdev/raid/raid0.o 00:02:15.106 CC module/bdev/nvme/nvme_rpc.o 00:02:15.106 CC module/bdev/aio/bdev_aio_rpc.o 00:02:15.106 CC module/bdev/ftl/bdev_ftl.o 00:02:15.106 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:15.106 CC module/bdev/raid/raid1.o 00:02:15.106 CC module/bdev/nvme/bdev_mdns_client.o 00:02:15.106 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:15.106 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:15.106 CC module/bdev/split/vbdev_split.o 00:02:15.106 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:15.106 CC module/bdev/split/vbdev_split_rpc.o 00:02:15.106 CC module/bdev/raid/concat.o 00:02:15.106 CC module/bdev/nvme/vbdev_opal.o 00:02:15.106 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:15.106 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:15.106 CC module/bdev/iscsi/bdev_iscsi.o 00:02:15.106 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:15.365 LIB libspdk_blobfs_bdev.a 00:02:15.623 LIB libspdk_sock_posix.a 00:02:15.623 SO libspdk_blobfs_bdev.so.5.0 00:02:15.623 LIB libspdk_bdev_gpt.a 00:02:15.623 SO libspdk_sock_posix.so.5.0 00:02:15.623 SO libspdk_bdev_gpt.so.5.0 00:02:15.623 LIB libspdk_bdev_split.a 00:02:15.623 LIB libspdk_bdev_malloc.a 00:02:15.623 LIB libspdk_bdev_null.a 00:02:15.623 SYMLINK libspdk_blobfs_bdev.so 00:02:15.623 LIB libspdk_bdev_passthru.a 00:02:15.623 SO libspdk_bdev_split.so.5.0 00:02:15.623 SO libspdk_bdev_malloc.so.5.0 00:02:15.623 SO libspdk_bdev_null.so.5.0 00:02:15.623 SO libspdk_bdev_passthru.so.5.0 00:02:15.623 SYMLINK libspdk_bdev_gpt.so 00:02:15.623 SYMLINK libspdk_sock_posix.so 00:02:15.623 LIB libspdk_bdev_error.a 00:02:15.623 LIB libspdk_bdev_aio.a 00:02:15.623 SO libspdk_bdev_error.so.5.0 00:02:15.623 SO libspdk_bdev_aio.so.5.0 00:02:15.623 SYMLINK libspdk_bdev_split.so 00:02:15.623 LIB libspdk_bdev_ftl.a 00:02:15.623 SYMLINK libspdk_bdev_malloc.so 00:02:15.623 SYMLINK libspdk_bdev_null.so 00:02:15.623 SYMLINK libspdk_bdev_passthru.so 00:02:15.623 LIB libspdk_bdev_delay.a 00:02:15.623 SO libspdk_bdev_ftl.so.5.0 00:02:15.623 SO libspdk_bdev_delay.so.5.0 00:02:15.623 SYMLINK libspdk_bdev_error.so 00:02:15.623 SYMLINK libspdk_bdev_aio.so 00:02:15.623 LIB libspdk_bdev_zone_block.a 00:02:15.623 SO libspdk_bdev_zone_block.so.5.0 00:02:15.623 LIB libspdk_bdev_iscsi.a 00:02:15.623 SYMLINK libspdk_bdev_ftl.so 00:02:15.623 SYMLINK libspdk_bdev_delay.so 00:02:15.881 SO libspdk_bdev_iscsi.so.5.0 00:02:15.881 SYMLINK libspdk_bdev_zone_block.so 00:02:15.881 SYMLINK libspdk_bdev_iscsi.so 00:02:15.881 LIB libspdk_bdev_lvol.a 00:02:15.881 LIB libspdk_bdev_virtio.a 00:02:15.881 SO libspdk_bdev_lvol.so.5.0 00:02:15.881 SO libspdk_bdev_virtio.so.5.0 00:02:15.881 SYMLINK libspdk_bdev_lvol.so 00:02:15.881 SYMLINK libspdk_bdev_virtio.so 00:02:16.139 LIB libspdk_bdev_raid.a 00:02:16.139 SO libspdk_bdev_raid.so.5.0 00:02:16.398 SYMLINK libspdk_bdev_raid.so 00:02:17.335 LIB libspdk_bdev_nvme.a 00:02:17.335 SO libspdk_bdev_nvme.so.6.0 00:02:17.592 SYMLINK libspdk_bdev_nvme.so 00:02:17.851 CC module/event/subsystems/iobuf/iobuf.o 00:02:17.851 CC module/event/subsystems/vmd/vmd.o 00:02:17.851 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:17.851 CC module/event/subsystems/sock/sock.o 00:02:17.851 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:17.851 CC module/event/subsystems/scheduler/scheduler.o 00:02:17.851 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:17.851 LIB libspdk_event_sock.a 00:02:17.851 LIB libspdk_event_vhost_blk.a 00:02:17.851 LIB 
libspdk_event_scheduler.a 00:02:17.851 LIB libspdk_event_vmd.a 00:02:17.851 SO libspdk_event_sock.so.4.0 00:02:17.851 LIB libspdk_event_iobuf.a 00:02:17.851 SO libspdk_event_vhost_blk.so.2.0 00:02:17.851 SO libspdk_event_scheduler.so.3.0 00:02:17.851 SO libspdk_event_vmd.so.5.0 00:02:17.851 SO libspdk_event_iobuf.so.2.0 00:02:17.851 SYMLINK libspdk_event_sock.so 00:02:17.851 SYMLINK libspdk_event_vhost_blk.so 00:02:17.851 SYMLINK libspdk_event_scheduler.so 00:02:17.851 SYMLINK libspdk_event_vmd.so 00:02:17.851 SYMLINK libspdk_event_iobuf.so 00:02:18.110 CC module/event/subsystems/accel/accel.o 00:02:18.400 LIB libspdk_event_accel.a 00:02:18.400 SO libspdk_event_accel.so.5.0 00:02:18.400 SYMLINK libspdk_event_accel.so 00:02:18.400 CC module/event/subsystems/bdev/bdev.o 00:02:18.659 LIB libspdk_event_bdev.a 00:02:18.659 SO libspdk_event_bdev.so.5.0 00:02:18.659 SYMLINK libspdk_event_bdev.so 00:02:18.917 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:18.917 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.917 CC module/event/subsystems/nbd/nbd.o 00:02:18.917 CC module/event/subsystems/scsi/scsi.o 00:02:18.917 CC module/event/subsystems/ublk/ublk.o 00:02:18.917 LIB libspdk_event_nbd.a 00:02:18.917 LIB libspdk_event_ublk.a 00:02:18.917 LIB libspdk_event_scsi.a 00:02:18.917 SO libspdk_event_nbd.so.5.0 00:02:18.917 SO libspdk_event_ublk.so.2.0 00:02:18.917 SO libspdk_event_scsi.so.5.0 00:02:18.917 SYMLINK libspdk_event_ublk.so 00:02:18.917 SYMLINK libspdk_event_nbd.so 00:02:18.917 SYMLINK libspdk_event_scsi.so 00:02:18.917 LIB libspdk_event_nvmf.a 00:02:19.175 SO libspdk_event_nvmf.so.5.0 00:02:19.175 SYMLINK libspdk_event_nvmf.so 00:02:19.175 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:19.175 CC module/event/subsystems/iscsi/iscsi.o 00:02:19.175 LIB libspdk_event_vhost_scsi.a 00:02:19.175 LIB libspdk_event_iscsi.a 00:02:19.175 SO libspdk_event_vhost_scsi.so.2.0 00:02:19.433 SO libspdk_event_iscsi.so.5.0 00:02:19.433 SYMLINK libspdk_event_vhost_scsi.so 00:02:19.433 SYMLINK libspdk_event_iscsi.so 00:02:19.433 SO libspdk.so.5.0 00:02:19.433 SYMLINK libspdk.so 00:02:19.699 CC app/trace_record/trace_record.o 00:02:19.699 CXX app/trace/trace.o 00:02:19.699 CC app/spdk_nvme_perf/perf.o 00:02:19.699 CC app/spdk_top/spdk_top.o 00:02:19.699 CC app/spdk_lspci/spdk_lspci.o 00:02:19.699 CC app/spdk_nvme_discover/discovery_aer.o 00:02:19.699 CC app/spdk_nvme_identify/identify.o 00:02:19.699 TEST_HEADER include/spdk/accel.h 00:02:19.699 TEST_HEADER include/spdk/accel_module.h 00:02:19.699 TEST_HEADER include/spdk/assert.h 00:02:19.699 CC test/rpc_client/rpc_client_test.o 00:02:19.699 TEST_HEADER include/spdk/barrier.h 00:02:19.699 TEST_HEADER include/spdk/base64.h 00:02:19.699 TEST_HEADER include/spdk/bdev.h 00:02:19.699 TEST_HEADER include/spdk/bdev_module.h 00:02:19.699 TEST_HEADER include/spdk/bdev_zone.h 00:02:19.699 TEST_HEADER include/spdk/bit_array.h 00:02:19.699 TEST_HEADER include/spdk/bit_pool.h 00:02:19.699 TEST_HEADER include/spdk/blob_bdev.h 00:02:19.699 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:19.699 TEST_HEADER include/spdk/blobfs.h 00:02:19.699 CC app/spdk_dd/spdk_dd.o 00:02:19.699 TEST_HEADER include/spdk/blob.h 00:02:19.699 TEST_HEADER include/spdk/conf.h 00:02:19.699 TEST_HEADER include/spdk/config.h 00:02:19.699 TEST_HEADER include/spdk/cpuset.h 00:02:19.699 TEST_HEADER include/spdk/crc16.h 00:02:19.699 TEST_HEADER include/spdk/crc32.h 00:02:19.699 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:19.699 CC app/iscsi_tgt/iscsi_tgt.o 00:02:19.699 TEST_HEADER 
include/spdk/crc64.h 00:02:19.699 CC app/nvmf_tgt/nvmf_main.o 00:02:19.699 TEST_HEADER include/spdk/dif.h 00:02:19.699 TEST_HEADER include/spdk/dma.h 00:02:19.699 CC app/vhost/vhost.o 00:02:19.699 TEST_HEADER include/spdk/endian.h 00:02:19.699 CC examples/ioat/verify/verify.o 00:02:19.699 CC examples/vmd/led/led.o 00:02:19.699 TEST_HEADER include/spdk/env_dpdk.h 00:02:19.699 CC examples/vmd/lsvmd/lsvmd.o 00:02:19.699 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.699 TEST_HEADER include/spdk/env.h 00:02:19.699 CC examples/ioat/perf/perf.o 00:02:19.699 TEST_HEADER include/spdk/event.h 00:02:19.699 CC app/fio/nvme/fio_plugin.o 00:02:19.699 CC examples/sock/hello_world/hello_sock.o 00:02:19.699 TEST_HEADER include/spdk/fd_group.h 00:02:19.699 CC examples/nvme/reconnect/reconnect.o 00:02:19.699 CC examples/nvme/arbitration/arbitration.o 00:02:19.699 TEST_HEADER include/spdk/fd.h 00:02:19.699 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.699 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.699 CC examples/nvme/hello_world/hello_world.o 00:02:19.699 CC examples/idxd/perf/perf.o 00:02:19.699 CC examples/nvme/hotplug/hotplug.o 00:02:19.699 CC examples/nvme/abort/abort.o 00:02:19.699 TEST_HEADER include/spdk/file.h 00:02:19.699 TEST_HEADER include/spdk/ftl.h 00:02:19.699 CC examples/accel/perf/accel_perf.o 00:02:19.699 TEST_HEADER include/spdk/gpt_spec.h 00:02:19.699 CC examples/util/zipf/zipf.o 00:02:19.699 TEST_HEADER include/spdk/hexlify.h 00:02:19.699 TEST_HEADER include/spdk/histogram_data.h 00:02:19.699 CC test/thread/poller_perf/poller_perf.o 00:02:19.699 TEST_HEADER include/spdk/idxd.h 00:02:19.699 CC test/event/event_perf/event_perf.o 00:02:19.699 CC test/nvme/aer/aer.o 00:02:19.699 TEST_HEADER include/spdk/idxd_spec.h 00:02:19.699 CC app/spdk_tgt/spdk_tgt.o 00:02:19.699 TEST_HEADER include/spdk/init.h 00:02:19.699 TEST_HEADER include/spdk/ioat.h 00:02:19.699 TEST_HEADER include/spdk/ioat_spec.h 00:02:19.699 TEST_HEADER include/spdk/iscsi_spec.h 00:02:19.699 TEST_HEADER include/spdk/json.h 00:02:19.699 TEST_HEADER include/spdk/jsonrpc.h 00:02:19.699 TEST_HEADER include/spdk/likely.h 00:02:19.699 TEST_HEADER include/spdk/log.h 00:02:19.699 TEST_HEADER include/spdk/lvol.h 00:02:19.699 CC examples/bdev/hello_world/hello_bdev.o 00:02:19.699 TEST_HEADER include/spdk/memory.h 00:02:19.699 CC examples/bdev/bdevperf/bdevperf.o 00:02:19.699 TEST_HEADER include/spdk/mmio.h 00:02:19.699 CC examples/thread/thread/thread_ex.o 00:02:19.699 TEST_HEADER include/spdk/nbd.h 00:02:19.699 CC examples/nvmf/nvmf/nvmf.o 00:02:19.699 CC test/dma/test_dma/test_dma.o 00:02:19.699 TEST_HEADER include/spdk/notify.h 00:02:19.699 TEST_HEADER include/spdk/nvme.h 00:02:19.699 CC test/app/bdev_svc/bdev_svc.o 00:02:19.699 CC test/blobfs/mkfs/mkfs.o 00:02:19.699 CC examples/blob/hello_world/hello_blob.o 00:02:19.699 TEST_HEADER include/spdk/nvme_intel.h 00:02:19.699 CC test/bdev/bdevio/bdevio.o 00:02:19.699 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:19.699 CC examples/blob/cli/blobcli.o 00:02:19.699 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:19.699 CC test/accel/dif/dif.o 00:02:19.699 TEST_HEADER include/spdk/nvme_spec.h 00:02:19.699 TEST_HEADER include/spdk/nvme_zns.h 00:02:19.699 CC test/env/mem_callbacks/mem_callbacks.o 00:02:19.699 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:19.699 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:19.699 TEST_HEADER include/spdk/nvmf.h 00:02:19.699 CC test/lvol/esnap/esnap.o 00:02:19.699 TEST_HEADER include/spdk/nvmf_spec.h 00:02:19.699 TEST_HEADER 
include/spdk/nvmf_transport.h 00:02:19.699 TEST_HEADER include/spdk/opal.h 00:02:19.699 TEST_HEADER include/spdk/opal_spec.h 00:02:19.699 TEST_HEADER include/spdk/pci_ids.h 00:02:19.958 TEST_HEADER include/spdk/pipe.h 00:02:19.958 TEST_HEADER include/spdk/queue.h 00:02:19.958 TEST_HEADER include/spdk/reduce.h 00:02:19.958 TEST_HEADER include/spdk/rpc.h 00:02:19.958 TEST_HEADER include/spdk/scheduler.h 00:02:19.958 TEST_HEADER include/spdk/scsi.h 00:02:19.958 TEST_HEADER include/spdk/scsi_spec.h 00:02:19.958 TEST_HEADER include/spdk/sock.h 00:02:19.958 TEST_HEADER include/spdk/stdinc.h 00:02:19.958 TEST_HEADER include/spdk/string.h 00:02:19.958 TEST_HEADER include/spdk/thread.h 00:02:19.958 TEST_HEADER include/spdk/trace.h 00:02:19.958 TEST_HEADER include/spdk/trace_parser.h 00:02:19.958 LINK spdk_lspci 00:02:19.958 TEST_HEADER include/spdk/tree.h 00:02:19.958 TEST_HEADER include/spdk/ublk.h 00:02:19.958 TEST_HEADER include/spdk/util.h 00:02:19.958 TEST_HEADER include/spdk/uuid.h 00:02:19.958 TEST_HEADER include/spdk/version.h 00:02:19.958 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:19.958 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:19.958 TEST_HEADER include/spdk/vhost.h 00:02:19.958 TEST_HEADER include/spdk/vmd.h 00:02:19.958 TEST_HEADER include/spdk/xor.h 00:02:19.958 TEST_HEADER include/spdk/zipf.h 00:02:19.958 CXX test/cpp_headers/accel.o 00:02:19.958 LINK lsvmd 00:02:19.958 LINK rpc_client_test 00:02:19.958 LINK led 00:02:19.958 LINK spdk_nvme_discover 00:02:19.958 LINK zipf 00:02:19.958 LINK event_perf 00:02:19.958 LINK poller_perf 00:02:19.958 LINK nvmf_tgt 00:02:19.958 LINK interrupt_tgt 00:02:19.958 LINK pmr_persistence 00:02:19.958 LINK cmb_copy 00:02:19.958 LINK vhost 00:02:19.958 LINK spdk_trace_record 00:02:19.958 LINK iscsi_tgt 00:02:20.221 LINK verify 00:02:20.221 LINK bdev_svc 00:02:20.221 LINK ioat_perf 00:02:20.221 LINK spdk_tgt 00:02:20.221 LINK hello_world 00:02:20.221 LINK mkfs 00:02:20.221 LINK hello_sock 00:02:20.221 LINK hotplug 00:02:20.221 LINK hello_bdev 00:02:20.221 LINK thread 00:02:20.221 LINK hello_blob 00:02:20.221 LINK aer 00:02:20.221 CXX test/cpp_headers/accel_module.o 00:02:20.221 CC test/event/reactor/reactor.o 00:02:20.221 CXX test/cpp_headers/assert.o 00:02:20.221 LINK nvmf 00:02:20.221 LINK idxd_perf 00:02:20.221 LINK arbitration 00:02:20.221 LINK spdk_dd 00:02:20.484 CXX test/cpp_headers/barrier.o 00:02:20.484 LINK reconnect 00:02:20.484 CC test/env/vtophys/vtophys.o 00:02:20.484 LINK abort 00:02:20.484 CXX test/cpp_headers/base64.o 00:02:20.484 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:20.484 CC test/nvme/reset/reset.o 00:02:20.484 LINK spdk_trace 00:02:20.484 CC test/event/reactor_perf/reactor_perf.o 00:02:20.484 CC app/fio/bdev/fio_plugin.o 00:02:20.484 CC test/nvme/sgl/sgl.o 00:02:20.484 LINK test_dma 00:02:20.484 LINK bdevio 00:02:20.484 CC test/event/app_repeat/app_repeat.o 00:02:20.484 LINK dif 00:02:20.484 CC test/app/histogram_perf/histogram_perf.o 00:02:20.484 CC test/env/memory/memory_ut.o 00:02:20.484 CC test/app/jsoncat/jsoncat.o 00:02:20.484 CXX test/cpp_headers/bdev.o 00:02:20.484 CXX test/cpp_headers/bdev_module.o 00:02:20.484 LINK accel_perf 00:02:20.484 CC test/env/pci/pci_ut.o 00:02:20.484 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:20.484 CC test/nvme/overhead/overhead.o 00:02:20.484 CC test/app/stub/stub.o 00:02:20.484 CC test/event/scheduler/scheduler.o 00:02:20.484 CC test/nvme/e2edp/nvme_dp.o 00:02:20.484 LINK nvme_manage 00:02:20.484 LINK reactor 00:02:20.747 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:02:20.747 CXX test/cpp_headers/bdev_zone.o 00:02:20.747 CXX test/cpp_headers/bit_array.o 00:02:20.747 LINK vtophys 00:02:20.747 CXX test/cpp_headers/bit_pool.o 00:02:20.747 CXX test/cpp_headers/blob_bdev.o 00:02:20.747 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:20.747 LINK reactor_perf 00:02:20.747 CC test/nvme/err_injection/err_injection.o 00:02:20.747 LINK blobcli 00:02:20.747 CXX test/cpp_headers/blobfs_bdev.o 00:02:20.747 LINK spdk_nvme 00:02:20.747 CC test/nvme/startup/startup.o 00:02:20.747 CC test/nvme/simple_copy/simple_copy.o 00:02:20.747 CC test/nvme/reserve/reserve.o 00:02:20.747 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:20.747 LINK env_dpdk_post_init 00:02:20.747 CC test/nvme/connect_stress/connect_stress.o 00:02:20.747 LINK jsoncat 00:02:20.747 LINK app_repeat 00:02:20.747 LINK histogram_perf 00:02:20.747 CC test/nvme/boot_partition/boot_partition.o 00:02:20.747 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.747 CC test/nvme/compliance/nvme_compliance.o 00:02:20.747 CXX test/cpp_headers/blobfs.o 00:02:21.009 CXX test/cpp_headers/blob.o 00:02:21.009 CXX test/cpp_headers/conf.o 00:02:21.009 CXX test/cpp_headers/config.o 00:02:21.009 CXX test/cpp_headers/cpuset.o 00:02:21.009 LINK reset 00:02:21.009 LINK stub 00:02:21.009 CXX test/cpp_headers/crc16.o 00:02:21.009 CXX test/cpp_headers/crc32.o 00:02:21.009 LINK mem_callbacks 00:02:21.009 LINK sgl 00:02:21.009 CXX test/cpp_headers/crc64.o 00:02:21.009 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:21.009 CXX test/cpp_headers/dif.o 00:02:21.009 CXX test/cpp_headers/dma.o 00:02:21.009 CC test/nvme/cuse/cuse.o 00:02:21.009 CC test/nvme/fdp/fdp.o 00:02:21.009 CXX test/cpp_headers/endian.o 00:02:21.009 LINK scheduler 00:02:21.009 CXX test/cpp_headers/env_dpdk.o 00:02:21.009 CXX test/cpp_headers/env.o 00:02:21.009 LINK spdk_nvme_perf 00:02:21.009 LINK startup 00:02:21.009 LINK err_injection 00:02:21.009 CXX test/cpp_headers/event.o 00:02:21.009 CXX test/cpp_headers/fd_group.o 00:02:21.009 CXX test/cpp_headers/fd.o 00:02:21.009 CXX test/cpp_headers/file.o 00:02:21.009 CXX test/cpp_headers/ftl.o 00:02:21.009 LINK connect_stress 00:02:21.009 LINK nvme_dp 00:02:21.009 CXX test/cpp_headers/gpt_spec.o 00:02:21.271 CXX test/cpp_headers/hexlify.o 00:02:21.272 LINK overhead 00:02:21.272 LINK reserve 00:02:21.272 LINK bdevperf 00:02:21.272 LINK boot_partition 00:02:21.272 LINK simple_copy 00:02:21.272 LINK spdk_nvme_identify 00:02:21.272 CXX test/cpp_headers/histogram_data.o 00:02:21.272 LINK spdk_top 00:02:21.272 CXX test/cpp_headers/idxd.o 00:02:21.272 CXX test/cpp_headers/idxd_spec.o 00:02:21.272 CXX test/cpp_headers/init.o 00:02:21.272 CXX test/cpp_headers/ioat.o 00:02:21.272 CXX test/cpp_headers/ioat_spec.o 00:02:21.272 LINK pci_ut 00:02:21.272 LINK fused_ordering 00:02:21.272 CXX test/cpp_headers/iscsi_spec.o 00:02:21.272 CXX test/cpp_headers/json.o 00:02:21.272 CXX test/cpp_headers/jsonrpc.o 00:02:21.272 CXX test/cpp_headers/likely.o 00:02:21.272 CXX test/cpp_headers/log.o 00:02:21.272 LINK spdk_bdev 00:02:21.272 CXX test/cpp_headers/lvol.o 00:02:21.272 LINK doorbell_aers 00:02:21.272 LINK nvme_fuzz 00:02:21.272 CXX test/cpp_headers/memory.o 00:02:21.272 CXX test/cpp_headers/mmio.o 00:02:21.533 CXX test/cpp_headers/nbd.o 00:02:21.533 CXX test/cpp_headers/notify.o 00:02:21.533 CXX test/cpp_headers/nvme.o 00:02:21.533 CXX test/cpp_headers/nvme_intel.o 00:02:21.533 CXX test/cpp_headers/nvme_ocssd.o 00:02:21.533 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:21.533 CXX test/cpp_headers/nvme_spec.o 00:02:21.533 CXX 
test/cpp_headers/nvme_zns.o 00:02:21.533 CXX test/cpp_headers/nvmf_cmd.o 00:02:21.533 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:21.533 CXX test/cpp_headers/nvmf.o 00:02:21.533 CXX test/cpp_headers/nvmf_spec.o 00:02:21.533 CXX test/cpp_headers/nvmf_transport.o 00:02:21.533 CXX test/cpp_headers/opal.o 00:02:21.533 CXX test/cpp_headers/opal_spec.o 00:02:21.533 CXX test/cpp_headers/pci_ids.o 00:02:21.533 CXX test/cpp_headers/pipe.o 00:02:21.533 LINK nvme_compliance 00:02:21.533 CXX test/cpp_headers/queue.o 00:02:21.533 LINK vhost_fuzz 00:02:21.533 CXX test/cpp_headers/reduce.o 00:02:21.533 CXX test/cpp_headers/rpc.o 00:02:21.533 CXX test/cpp_headers/scheduler.o 00:02:21.533 CXX test/cpp_headers/scsi.o 00:02:21.533 CXX test/cpp_headers/scsi_spec.o 00:02:21.533 CXX test/cpp_headers/sock.o 00:02:21.533 CXX test/cpp_headers/stdinc.o 00:02:21.533 CXX test/cpp_headers/string.o 00:02:21.533 CXX test/cpp_headers/thread.o 00:02:21.533 CXX test/cpp_headers/trace.o 00:02:21.533 CXX test/cpp_headers/trace_parser.o 00:02:21.533 CXX test/cpp_headers/tree.o 00:02:21.533 CXX test/cpp_headers/ublk.o 00:02:21.533 CXX test/cpp_headers/util.o 00:02:21.533 LINK fdp 00:02:21.533 CXX test/cpp_headers/uuid.o 00:02:21.533 CXX test/cpp_headers/version.o 00:02:21.793 CXX test/cpp_headers/vfio_user_pci.o 00:02:21.793 CXX test/cpp_headers/vfio_user_spec.o 00:02:21.793 CXX test/cpp_headers/vhost.o 00:02:21.793 CXX test/cpp_headers/vmd.o 00:02:21.793 CXX test/cpp_headers/xor.o 00:02:21.793 CXX test/cpp_headers/zipf.o 00:02:22.050 LINK memory_ut 00:02:22.617 LINK cuse 00:02:22.876 LINK iscsi_fuzz 00:02:25.436 LINK esnap 00:02:25.436 00:02:25.436 real 0m45.180s 00:02:25.436 user 9m34.513s 00:02:25.436 sys 2m9.863s 00:02:25.436 07:19:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:25.436 07:19:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.436 ************************************ 00:02:25.436 END TEST make 00:02:25.436 ************************************ 00:02:25.694 07:19:41 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.694 07:19:41 -- nvmf/common.sh@7 -- # uname -s 00:02:25.694 07:19:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.694 07:19:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.694 07:19:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.694 07:19:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:25.694 07:19:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.694 07:19:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.694 07:19:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.694 07:19:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.694 07:19:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.694 07:19:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:25.694 07:19:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:25.694 07:19:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:25.694 07:19:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.694 07:19:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.694 07:19:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:25.694 07:19:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:25.694 07:19:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:02:25.694 07:19:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.694 07:19:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.694 07:19:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.694 07:19:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.694 07:19:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.694 07:19:41 -- paths/export.sh@5 -- # export PATH 00:02:25.694 07:19:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.694 07:19:41 -- nvmf/common.sh@46 -- # : 0 00:02:25.694 07:19:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:25.694 07:19:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:25.694 07:19:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:25.694 07:19:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.694 07:19:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.694 07:19:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:25.694 07:19:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:25.694 07:19:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:25.694 07:19:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.694 07:19:41 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.694 07:19:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:25.694 07:19:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.694 07:19:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.694 07:19:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.694 07:19:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.694 07:19:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.694 07:19:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.694 07:19:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.694 07:19:41 -- spdk/autotest.sh@48 -- # udevadm_pid=3939206 00:02:25.694 07:19:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.694 07:19:41 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:25.694 07:19:41 -- spdk/autotest.sh@54 -- # echo 3939208 00:02:25.694 07:19:41 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:25.694 07:19:41 -- spdk/autotest.sh@56 -- # echo 3939209 00:02:25.694 07:19:41 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:25.694 07:19:41 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:25.694 07:19:41 -- spdk/autotest.sh@60 -- # echo 3939210 00:02:25.694 07:19:41 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:25.694 07:19:41 -- spdk/autotest.sh@62 -- # echo 3939211 00:02:25.694 07:19:41 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:25.694 07:19:41 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:25.694 07:19:41 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:25.694 07:19:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:25.694 07:19:41 -- common/autotest_common.sh@10 -- # set +x 00:02:25.694 07:19:41 -- spdk/autotest.sh@70 -- # create_test_list 00:02:25.694 07:19:41 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:25.694 07:19:41 -- common/autotest_common.sh@10 -- # set +x 00:02:25.694 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:25.694 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:25.694 07:19:41 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:25.694 07:19:41 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.694 07:19:41 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.694 07:19:41 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:25.694 07:19:41 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.694 07:19:41 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:25.694 07:19:41 -- common/autotest_common.sh@1440 -- # uname 00:02:25.694 07:19:41 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:25.695 07:19:41 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:25.695 07:19:41 -- common/autotest_common.sh@1460 -- # uname 00:02:25.695 07:19:41 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:25.695 07:19:41 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:25.695 07:19:41 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:25.695 07:19:41 -- spdk/autotest.sh@83 -- # hash lcov 00:02:25.695 07:19:41 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:25.695 07:19:41 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:25.695 --rc lcov_branch_coverage=1 00:02:25.695 --rc lcov_function_coverage=1 00:02:25.695 --rc genhtml_branch_coverage=1 00:02:25.695 --rc genhtml_function_coverage=1 00:02:25.695 --rc genhtml_legend=1 00:02:25.695 --rc geninfo_all_blocks=1 00:02:25.695 ' 00:02:25.695 07:19:41 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:25.695 --rc lcov_branch_coverage=1 00:02:25.695 --rc lcov_function_coverage=1 00:02:25.695 --rc genhtml_branch_coverage=1 00:02:25.695 
--rc genhtml_function_coverage=1 00:02:25.695 --rc genhtml_legend=1 00:02:25.695 --rc geninfo_all_blocks=1 00:02:25.695 ' 00:02:25.695 07:19:41 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:25.695 --rc lcov_branch_coverage=1 00:02:25.695 --rc lcov_function_coverage=1 00:02:25.695 --rc genhtml_branch_coverage=1 00:02:25.695 --rc genhtml_function_coverage=1 00:02:25.695 --rc genhtml_legend=1 00:02:25.695 --rc geninfo_all_blocks=1 00:02:25.695 --no-external' 00:02:25.695 07:19:41 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:25.695 --rc lcov_branch_coverage=1 00:02:25.695 --rc lcov_function_coverage=1 00:02:25.695 --rc genhtml_branch_coverage=1 00:02:25.695 --rc genhtml_function_coverage=1 00:02:25.695 --rc genhtml_legend=1 00:02:25.695 --rc geninfo_all_blocks=1 00:02:25.695 --no-external' 00:02:25.695 07:19:41 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:25.695 lcov: LCOV version 1.14 00:02:25.695 07:19:41 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:27.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:27.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:27.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:27.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:27.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:27.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:27.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:27.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:27.596 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:27.596 
00:02:27.596 geninfo: WARNING: GCOV did not produce any data for the following .gcno files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ (each reported "no functions found"): bit_array, bit_pool, blob_bdev, blobfs_bdev, blobfs, blob, conf, config, cpuset, crc32, crc16, crc64, dif, dma, endian, env_dpdk, env, event, fd_group, fd, file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, likely, log, lvol, memory, mmio, nbd, notify, nvme_intel, nvme, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, sock, scsi_spec, stdinc, string, thread, trace, trace_parser, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf
00:02:45.677 geninfo: WARNING: GCOV did not produce any data for the following .gcno files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ (each reported "no functions found"): ftl_p2l_upgrade, ftl_band_upgrade, ftl_chunk_upgrade
00:03:00.585 07:20:14 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:03:00.585 07:20:14 -- common/autotest_common.sh@712 -- # xtrace_disable
00:03:00.585 07:20:14 -- common/autotest_common.sh@10 -- # set +x
00:03:00.585 07:20:14 -- spdk/autotest.sh@102 -- # rm -f
00:03:00.585 07:20:14 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:00.585 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:03:00.585 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:00.585 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:00.585 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:03:00.585 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:03:00.585 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:03:00.585 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:03:00.585 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:03:00.585 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:03:00.585 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:03:00.585 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:03:00.585 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:03:00.585 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:03:00.585 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:03:00.585 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:00.585 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:00.585 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:00.585 07:20:15 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:03:00.585 07:20:15 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:03:00.585 07:20:15 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:03:00.585 07:20:15 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:03:00.585 07:20:15 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:03:00.585 07:20:15 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:03:00.585 07:20:15 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:03:00.585 07:20:15 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:00.585 07:20:15 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:03:00.585 07:20:15 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:03:00.585 07:20:15 -- spdk/autotest.sh@121 -- # grep -v p
00:03:00.585 07:20:15 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:03:00.585 07:20:15 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:03:00.585 07:20:15 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:03:00.585 07:20:15 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:03:00.585 07:20:15 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:03:00.585 07:20:15 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:00.585 No valid GPT data, bailing
00:03:00.585 07:20:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:00.585 07:20:15 -- scripts/common.sh@393 -- # pt=
00:03:00.585 07:20:15 -- scripts/common.sh@394 -- # return 1
00:03:00.585 07:20:15 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:00.585 1+0 records in
00:03:00.585 1+0 records out
00:03:00.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00251479 s, 417 MB/s
00:03:00.585 07:20:15 -- spdk/autotest.sh@129 -- # sync
00:03:00.585 07:20:15 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:00.585 07:20:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:00.585 07:20:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:01.520 07:20:17 -- spdk/autotest.sh@135 -- # uname -s
00:03:01.520 07:20:17 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
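
The pre_cleanup pass above wipes the first megabyte of every unpartitioned, non-zoned NVMe namespace that carries no GPT. The zoned-device guard it runs first just reads each namespace's queue/zoned attribute in sysfs. A standalone sketch of that check, mirroring the trace (the body is reduced; only the sysfs path and the "none" comparison come from the log):

    #!/usr/bin/env bash
    # Collect zoned NVMe namespaces so the dd wipe can skip them; a
    # queue/zoned value of "none" marks a conventional (safe) device.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        dev=${nvme##*/}
        if [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "zoned devices: ${!zoned_devs[*]}"
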
00:03:01.520 07:20:17 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:01.520 07:20:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:01.520 07:20:17 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:01.520 07:20:17 -- common/autotest_common.sh@10 -- # set +x
00:03:01.520 ************************************
00:03:01.520 START TEST setup.sh
00:03:01.520 ************************************
00:03:01.520 07:20:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:01.520 * Looking for test storage...
00:03:01.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:01.520 07:20:17 -- setup/test-setup.sh@10 -- # uname -s
00:03:01.520 07:20:17 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:01.520 07:20:17 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:01.520 07:20:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:01.520 07:20:17 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:01.520 07:20:17 -- common/autotest_common.sh@10 -- # set +x
00:03:01.520 ************************************
00:03:01.520 START TEST acl
00:03:01.520 ************************************
00:03:01.520 07:20:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:01.779 * Looking for test storage...
00:03:01.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:01.779 07:20:17 -- setup/acl.sh@10 -- # get_zoned_devs
00:03:01.779 07:20:17 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:03:01.779 07:20:17 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:03:01.779 07:20:17 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:03:01.779 07:20:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:03:01.779 07:20:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:03:01.779 07:20:17 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:03:01.779 07:20:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:01.779 07:20:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:03:01.779 07:20:17 -- setup/acl.sh@12 -- # devs=()
00:03:01.779 07:20:17 -- setup/acl.sh@12 -- # declare -a devs
00:03:01.779 07:20:17 -- setup/acl.sh@13 -- # drivers=()
00:03:01.779 07:20:17 -- setup/acl.sh@13 -- # declare -A drivers
00:03:01.779 07:20:17 -- setup/acl.sh@51 -- # setup reset
00:03:01.779 07:20:17 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:01.779 07:20:17 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:03.151 07:20:19 -- setup/acl.sh@52 -- # collect_setup_devs
00:03:03.151 07:20:19 -- setup/acl.sh@16 -- # local dev driver
00:03:03.151 07:20:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:03.151 07:20:19 -- setup/acl.sh@15 -- # setup output status
00:03:03.151 07:20:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.151 07:20:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:04.086 Hugepages
00:03:04.086 node hugesize free / total
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # continue
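
run_test, which brackets every suite in this log, is a thin wrapper: it prints the START TEST banner, executes the suite, and prints the real/user/sys timing plus the END TEST banner that appear further down. A reduced sketch of such a wrapper (this models the observable behavior only, not the exact common/autotest_common.sh implementation):

    # Sketch of a run_test-style harness: banner, timed run, banner.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # e.g. run_test_sketch acl ./test/setup/acl.sh
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }
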
00:03:04.086 07:20:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # continue
00:03:04.086 07:20:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # continue
00:03:04.086 07:20:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:04.086
00:03:04.086 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # continue
00:03:04.086 07:20:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:04.086 07:20:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:04.086 07:20:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:04.086 07:20:20 -- setup/acl.sh@20 -- # continue
00:03:04.086 07:20:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... the same match-and-skip trace repeats for each remaining ioatdma channel, 0000:00:04.1 through 0000:80:04.7 ...]
00:03:04.087 07:20:20 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:03:04.087 07:20:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:04.087 07:20:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:03:04.087 07:20:20 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:04.087 07:20:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:04.087 07:20:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:04.087 07:20:20 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:04.087 07:20:20 -- setup/acl.sh@54 -- # run_test denied denied
00:03:04.087 07:20:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:04.087 07:20:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:04.087 07:20:20 -- common/autotest_common.sh@10 -- # set +x
00:03:04.087 ************************************
00:03:04.087 START TEST denied
00:03:04.087 ************************************
00:03:04.087 07:20:20 -- common/autotest_common.sh@1104 -- # denied
00:03:04.087 07:20:20 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:03:04.087 07:20:20 -- setup/acl.sh@38 -- # setup output config
00:03:04.087 07:20:20 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:03:04.087 07:20:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:04.087 07:20:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:05.461 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:03:05.461 07:20:21 -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:03:05.461 07:20:21 -- setup/acl.sh@28 -- # local dev driver
00:03:05.461 07:20:21 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:05.461 07:20:21 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:03:05.461 07:20:21 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:03:05.461 07:20:21 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:05.461 07:20:21 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
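
collect_setup_devs, traced above, simply parses the `setup.sh status` table: it reads the Type/BDF/Vendor/Device/NUMA/Driver columns line by line, skips hugepage and header rows (anything that does not look like a PCI BDF), and records every controller bound to the nvme driver. A reduced sketch of that parse loop (the read pattern and the tests come from the trace; the invocation path is illustrative):

    # Keep only nvme-bound controllers from "setup.sh status" output.
    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue           # skip Hugepages/header rows
        [[ $driver == nvme ]] || continue           # only NVMe controllers
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue  # honor the block list
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(./scripts/setup.sh status)
    printf 'found nvme controller: %s\n' "${devs[@]}"
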
00:03:05.461 07:20:21 -- setup/acl.sh@41 -- # setup reset
00:03:05.461 07:20:21 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:05.461 07:20:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:07.991
00:03:07.991 real 0m3.769s
00:03:07.991 user 0m1.166s
00:03:07.991 sys 0m1.694s
00:03:07.991 07:20:23 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:07.991 07:20:23 -- common/autotest_common.sh@10 -- # set +x
00:03:07.991 ************************************
00:03:07.991 END TEST denied
00:03:07.991 ************************************
00:03:07.991 07:20:23 -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:07.991 07:20:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:07.991 07:20:23 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:07.991 07:20:23 -- common/autotest_common.sh@10 -- # set +x
00:03:07.991 ************************************
00:03:07.991 START TEST allowed
00:03:07.991 ************************************
00:03:07.991 07:20:23 -- common/autotest_common.sh@1104 -- # allowed
00:03:07.991 07:20:23 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:03:07.991 07:20:23 -- setup/acl.sh@45 -- # setup output config
00:03:07.991 07:20:23 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:03:07.991 07:20:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:07.991 07:20:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:10.519 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:10.519 07:20:26 -- setup/acl.sh@47 -- # verify
00:03:10.519 07:20:26 -- setup/acl.sh@28 -- # local dev driver
00:03:10.519 07:20:26 -- setup/acl.sh@48 -- # setup reset
00:03:10.519 07:20:26 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:10.519 07:20:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:11.891
00:03:11.891 real 0m3.929s
00:03:11.891 user 0m1.038s
00:03:11.891 sys 0m1.751s
00:03:11.891 07:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:11.891 07:20:27 -- common/autotest_common.sh@10 -- # set +x
00:03:11.891 ************************************
00:03:11.891 END TEST allowed
00:03:11.891 ************************************
00:03:11.891
00:03:11.891 real 0m10.200s
00:03:11.891 user 0m3.174s
00:03:11.891 sys 0m5.054s
00:03:11.891 07:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:11.891 07:20:27 -- common/autotest_common.sh@10 -- # set +x
00:03:11.891 ************************************
00:03:11.891 END TEST acl
00:03:11.891 ************************************
00:03:11.891 07:20:27 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:11.891 07:20:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:11.891 07:20:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:11.891 07:20:27 -- common/autotest_common.sh@10 -- # set +x
00:03:11.891 ************************************
00:03:11.891 START TEST hugepages
00:03:11.891 ************************************
00:03:11.891 07:20:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:11.891 * Looking for test storage...
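
The denied and allowed cases exercise setup.sh's PCI filtering from both sides: with PCI_BLOCKED set, `setup.sh config` must log "Skipping denied controller" and leave the device on its kernel driver; with PCI_ALLOWED set, it must rebind exactly that controller (here nvme -> vfio-pci). A sketch of the two assertions as they read in the trace (grep patterns verbatim from the log; the function names and the relative script path are illustrative):

    # Assert the block list is honored...
    check_denied() {
        PCI_BLOCKED=' 0000:88:00.0' ./scripts/setup.sh config \
            | grep 'Skipping denied controller at 0000:88:00.0'
    }
    # ...and that the allow list rebinds the controller to vfio-pci.
    check_allowed() {
        PCI_ALLOWED=0000:88:00.0 ./scripts/setup.sh config \
            | grep -E '0000:88:00.0 .*: nvme -> .*'
    }
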
00:03:11.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:11.891 07:20:27 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:11.891 07:20:27 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:11.891 07:20:27 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:11.891 07:20:27 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:11.891 07:20:27 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:11.891 07:20:27 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:11.891 07:20:27 -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:11.891 07:20:27 -- setup/common.sh@18 -- # local node=
00:03:11.891 07:20:27 -- setup/common.sh@19 -- # local var val
00:03:11.891 07:20:27 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.891 07:20:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.891 07:20:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.891 07:20:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.891 07:20:27 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.891 07:20:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.891 07:20:27 -- setup/common.sh@31 -- # IFS=': '
00:03:11.891 07:20:27 -- setup/common.sh@31 -- # read -r var val _
00:03:11.891 07:20:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43737212 kB' 'MemAvailable: 47242624 kB' 'Buffers: 2704 kB' 'Cached: 10211280 kB' 'SwapCached: 0 kB' 'Active: 7220452 kB' 'Inactive: 3506552 kB' 'Active(anon): 6826100 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516324 kB' 'Mapped: 212500 kB' 'Shmem: 6313080 kB' 'KReclaimable: 196260 kB' 'Slab: 568784 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372524 kB' 'KernelStack: 12864 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 7955268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
00:03:11.891 07:20:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:11.891 07:20:27 -- setup/common.sh@32 -- # continue
00:03:11.891 07:20:27 -- setup/common.sh@31 -- # IFS=': '
00:03:11.891 07:20:27 -- setup/common.sh@31 -- # read -r var val _
[... the same compare-and-continue trace repeats for each remaining /proc/meminfo field, MemFree through HugePages_Surp, until the key of interest is reached ...]
00:03:11.892 07:20:27 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:11.892 07:20:27 -- setup/common.sh@33 -- # echo 2048
00:03:11.892 07:20:27 -- setup/common.sh@33 -- # return 0
00:03:11.892 07:20:27 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:11.892 07:20:27 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:11.892 07:20:27 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:11.892 07:20:27 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:11.892 07:20:27 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:11.892 07:20:27 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
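
get_meminfo, traced at such length above, is just a keyed lookup over /proc/meminfo: split each line on ': ', compare the key against the requested one, print the value. A compact sketch of the same lookup (the real helper also accepts a NUMA node and strips the "Node <N> " prefix from per-node meminfo files; that part is omitted here):

    # Minimal get_meminfo: print the value for one /proc/meminfo key.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch Hugepagesize   # prints 2048 on this system
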
00:03:11.892 07:20:27 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:11.892 07:20:27 -- setup/hugepages.sh@207 -- # get_nodes
00:03:11.892 07:20:27 -- setup/hugepages.sh@27 -- # local node
00:03:11.892 07:20:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.892 07:20:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:11.892 07:20:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.892 07:20:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:11.892 07:20:27 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.892 07:20:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.892 07:20:27 -- setup/hugepages.sh@208 -- # clear_hp
00:03:11.892 07:20:27 -- setup/hugepages.sh@37 -- # local node hp
00:03:11.892 07:20:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:11.892 07:20:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.892 07:20:27 -- setup/hugepages.sh@41 -- # echo 0
00:03:11.892 07:20:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.892 07:20:27 -- setup/hugepages.sh@41 -- # echo 0
00:03:11.892 07:20:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:11.892 07:20:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.892 07:20:27 -- setup/hugepages.sh@41 -- # echo 0
00:03:11.892 07:20:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.892 07:20:27 -- setup/hugepages.sh@41 -- # echo 0
00:03:11.892 07:20:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:11.892 07:20:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:11.892 07:20:27 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:11.893 07:20:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:11.893 07:20:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:11.893 07:20:27 -- common/autotest_common.sh@10 -- # set +x
00:03:11.893 ************************************
00:03:11.893 START TEST default_setup
00:03:11.893 ************************************
00:03:11.893 07:20:27 -- common/autotest_common.sh@1104 -- # default_setup
00:03:11.893 07:20:27 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:11.893 07:20:27 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:11.893 07:20:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:11.893 07:20:27 -- setup/hugepages.sh@51 -- # shift
00:03:11.893 07:20:27 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:11.893 07:20:27 -- setup/hugepages.sh@52 -- # local node_ids
00:03:11.893 07:20:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.893 07:20:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:11.893 07:20:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:11.893 07:20:27 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:11.893 07:20:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.893 07:20:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:11.893 07:20:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.893 07:20:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.893 07:20:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.893 07:20:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
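
get_test_nr_hugepages, entered above, converts the requested budget into a page count: 2097152 / 2048 (the Hugepagesize just read) = 1024 pages, and because a single node id ('0') was passed, all 1024 pages are assigned to nodes_test[0]. The arithmetic, spelled out (names follow the trace):

    size=2097152            # first argument of get_test_nr_hugepages
    default_hugepages=2048  # Hugepagesize reported by get_meminfo
    nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"    # 1024, matching "nr_hugepages=1024" above
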
00:03:11.893 07:20:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:11.893 07:20:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:11.893 07:20:27 -- setup/hugepages.sh@73 -- # return 0
00:03:11.893 07:20:27 -- setup/hugepages.sh@137 -- # setup output
00:03:11.893 07:20:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.893 07:20:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:13.268 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:13.268 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:13.268 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:13.268 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:13.268 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:13.268 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:13.268 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:13.268 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:13.268 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:13.268 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:13.268 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:13.268 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:13.268 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:13.268 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:13.268 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:13.268 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:14.206 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:14.206 07:20:30 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:14.206 07:20:30 -- setup/hugepages.sh@89 -- # local node
00:03:14.206 07:20:30 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.206 07:20:30 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.206 07:20:30 -- setup/hugepages.sh@92 -- # local surp
00:03:14.206 07:20:30 -- setup/hugepages.sh@93 -- # local resv
00:03:14.206 07:20:30 -- setup/hugepages.sh@94 -- # local anon
00:03:14.206 07:20:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:14.206 07:20:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:14.206 07:20:30 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:14.206 07:20:30 -- setup/common.sh@18 -- # local node=
00:03:14.206 07:20:30 -- setup/common.sh@19 -- # local var val
00:03:14.206 07:20:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.206 07:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.206 07:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.206 07:20:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.206 07:20:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.206 07:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.206 07:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:14.206 07:20:30 -- setup/common.sh@31 -- # read -r var val _
00:03:14.206 07:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45803604 kB' 'MemAvailable: 49309016 kB' 'Buffers: 2704 kB' 'Cached: 10211372 kB' 'SwapCached: 0 kB' 'Active: 7238816 kB' 'Inactive: 3506552 kB' 'Active(anon): 6844464 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534624 kB' 'Mapped: 212524 kB' 'Shmem: 6313172 kB' 'KReclaimable: 196260 kB' 'Slab: 568392 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372132 kB' 'KernelStack: 12976 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7975700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
00:03:14.206 07:20:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.206 07:20:30 -- setup/common.sh@32 -- # continue
00:03:14.206 07:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:14.206 07:20:30 -- setup/common.sh@31 -- # read -r var val _
[... identical compare-and-continue entries follow for each field from MemFree through Committed_AS ...]
00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue
00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read
-r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.207 07:20:30 -- setup/common.sh@33 -- # echo 0 00:03:14.207 07:20:30 -- setup/common.sh@33 -- # return 0 00:03:14.207 07:20:30 -- setup/hugepages.sh@97 -- # anon=0 00:03:14.207 07:20:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.207 07:20:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.207 07:20:30 -- setup/common.sh@18 -- # local node= 00:03:14.207 07:20:30 -- setup/common.sh@19 -- # local var val 00:03:14.207 07:20:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.207 07:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.207 07:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.207 07:20:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.207 07:20:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.207 07:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45805328 kB' 'MemAvailable: 49310740 kB' 'Buffers: 2704 kB' 'Cached: 10211372 kB' 'SwapCached: 0 kB' 'Active: 7238884 kB' 'Inactive: 3506552 kB' 'Active(anon): 6844532 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534604 kB' 'Mapped: 212660 kB' 'Shmem: 6313172 kB' 'KReclaimable: 196260 kB' 'Slab: 568396 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372136 kB' 'KernelStack: 12864 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7975712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 
'DirectMap1G: 51380224 kB' 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.207 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.207 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 
00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 
-- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:14.208 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.208 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.208 07:20:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.208 07:20:30 -- setup/common.sh@33 -- # echo 0 00:03:14.208 07:20:30 -- setup/common.sh@33 -- # return 0 00:03:14.208 07:20:30 -- setup/hugepages.sh@99 -- # surp=0 00:03:14.208 07:20:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.208 07:20:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.208 07:20:30 -- setup/common.sh@18 -- # local node= 00:03:14.208 07:20:30 -- setup/common.sh@19 -- # local var val 00:03:14.208 07:20:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.208 07:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.208 07:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.208 07:20:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.209 07:20:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.209 07:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.209 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.209 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.209 07:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45805916 kB' 'MemAvailable: 49311328 kB' 'Buffers: 2704 kB' 'Cached: 10211388 kB' 'SwapCached: 0 kB' 'Active: 7238060 kB' 'Inactive: 3506552 kB' 'Active(anon): 6843708 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533716 kB' 'Mapped: 212616 kB' 'Shmem: 6313188 kB' 'KReclaimable: 196260 kB' 'Slab: 568396 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372136 kB' 'KernelStack: 12832 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7975728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:03:14.209 07:20:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.209 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.209 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.209 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.209 07:20:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.209 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.209 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.209 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.209 07:20:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.209 07:20:30 -- setup/common.sh@32 -- # continue 00:03:14.209 07:20:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.209 07:20:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.209 07:20:30 -- setup/common.sh@32 -- # [[ Buffers == 
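The loop traced above is the get_meminfo helper from the test suite's setup/common.sh: mapfile snapshots /proc/meminfo (or a per-node meminfo file under sysfs), an extglob substitution strips any "Node <n>" prefix, and each line is split on IFS=': ' until the requested key matches, at which point its value is echoed. A minimal standalone sketch of the same pattern, assuming bash 4+ for mapfile; the helper name and the two example calls below are illustrative, not the SPDK source:

  #!/usr/bin/env bash
  # Sketch of a /proc/meminfo lookup in the style the trace shows.
  shopt -s extglob

  get_meminfo_value() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo mem line var val _
      # Per-node counters live under sysfs, as the trace uses for node 0.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      # Node files prefix every line with "Node <n> "; strip that first.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo_value HugePages_Surp     # prints 0 on the box traced here
  get_meminfo_value HugePages_Total 0  # node-0 value, 1024 in this run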
00:03:14.208 07:20:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:14.208 07:20:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:14.208 07:20:30 -- setup/common.sh@18 -- # local node=
00:03:14.208 07:20:30 -- setup/common.sh@19 -- # local var val
00:03:14.208 07:20:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.208 07:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.208 07:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.209 07:20:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.209 07:20:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.209 07:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.209 07:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:14.209 07:20:30 -- setup/common.sh@31 -- # read -r var val _
00:03:14.209 07:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45805916 kB' 'MemAvailable: 49311328 kB' 'Buffers: 2704 kB' 'Cached: 10211388 kB' 'SwapCached: 0 kB' 'Active: 7238060 kB' 'Inactive: 3506552 kB' 'Active(anon): 6843708 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533716 kB' 'Mapped: 212616 kB' 'Shmem: 6313188 kB' 'KReclaimable: 196260 kB' 'Slab: 568396 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372136 kB' 'KernelStack: 12832 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7975728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
00:03:14.209 07:20:30 -- setup/common.sh@31-32 -- # [condensed: key-by-key scan against HugePages_Rsvd (MemTotal through HugePages_Free); every non-match hit continue]
00:03:14.210 07:20:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.210 07:20:30 -- setup/common.sh@33 -- # echo 0
00:03:14.210 07:20:30 -- setup/common.sh@33 -- # return 0
00:03:14.210 07:20:30 -- setup/hugepages.sh@100 -- # resv=0
00:03:14.210 07:20:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:14.210 nr_hugepages=1024
00:03:14.210 07:20:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:14.210 resv_hugepages=0
00:03:14.210 07:20:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:14.210 surplus_hugepages=0
00:03:14.210 07:20:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:14.210 anon_hugepages=0
00:03:14.210 07:20:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.210 07:20:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
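At this point setup/hugepages.sh has everything it needs for its bookkeeping: anon, surp and resv all came back 0, nr_hugepages is the 1024 pages the test configured, and the @107/@109 arithmetic asserts that the kernel-reported pool adds up before the per-node split is checked. The same consistency check written out standalone (variable names mirror the trace; the awk extraction is an illustrative stand-in for the get_meminfo calls above):

  #!/usr/bin/env bash
  # The pool is settled when the kernel total equals requested + surplus + reserved.
  nr_hugepages=1024  # what the test configured
  surp=0             # HugePages_Surp, from the lookup above
  resv=0             # HugePages_Rsvd, from the lookup above
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent: $total pages"
  else
      echo "unexpected pool size: $total" >&2
      exit 1
  fi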
00:03:14.210 07:20:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:14.210 07:20:30 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:14.210 07:20:30 -- setup/common.sh@18 -- # local node=
00:03:14.210 07:20:30 -- setup/common.sh@19 -- # local var val
00:03:14.210 07:20:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.210 07:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.210 07:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.210 07:20:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.210 07:20:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.210 07:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.210 07:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:14.210 07:20:30 -- setup/common.sh@31 -- # read -r var val _
00:03:14.210 07:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45806112 kB' 'MemAvailable: 49311524 kB' 'Buffers: 2704 kB' 'Cached: 10211388 kB' 'SwapCached: 0 kB' 'Active: 7237672 kB' 'Inactive: 3506552 kB' 'Active(anon): 6843320 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533332 kB' 'Mapped: 212540 kB' 'Shmem: 6313188 kB' 'KReclaimable: 196260 kB' 'Slab: 568404 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372144 kB' 'KernelStack: 12880 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7975744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
00:03:14.210 07:20:30 -- setup/common.sh@31-32 -- # [condensed: key-by-key scan against HugePages_Total (MemTotal through Unaccepted); every non-match hit continue]
00:03:14.483 07:20:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:14.483 07:20:30 -- setup/common.sh@33 -- # echo 1024
00:03:14.483 07:20:30 -- setup/common.sh@33 -- # return 0
00:03:14.483 07:20:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.483 07:20:30 -- setup/hugepages.sh@112 -- # get_nodes
00:03:14.483 07:20:30 -- setup/hugepages.sh@27 -- # local node
00:03:14.483 07:20:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.483 07:20:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:14.483 07:20:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.483 07:20:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:14.483 07:20:30 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:14.483 07:20:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
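get_nodes has just enumerated /sys/devices/system/node/node+([0-9]) and found two NUMA nodes (no_nodes=2), with all 1024 pages expected on node 0 and none on node 1; the @115 loop that follows re-reads each node's own meminfo to verify that split. A sketch of that per-node walk (the loop body is illustrative; the real script goes through get_meminfo with a node argument):

  #!/usr/bin/env bash
  # Walk each NUMA node's sysfs meminfo and report its hugepage counters.
  shopt -s extglob nullglob
  for node_dir in /sys/devices/system/node/node+([0-9]); do
      node=${node_dir##*node}
      # Node lines read "Node <n> HugePages_Total: ...", so match unanchored.
      total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
      surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node_dir/meminfo")
      echo "node$node: HugePages_Total=$total HugePages_Surp=$surp"
  done
  # On the machine traced here, node0 reports HugePages_Total: 1024.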
00:03:14.483 07:20:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.483 07:20:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.483 07:20:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:14.483 07:20:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.483 07:20:30 -- setup/common.sh@18 -- # local node=0
00:03:14.483 07:20:30 -- setup/common.sh@19 -- # local var val
00:03:14.483 07:20:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:14.483 07:20:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.483 07:20:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:14.483 07:20:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:14.483 07:20:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.483 07:20:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.483 07:20:30 -- setup/common.sh@31 -- # IFS=': '
00:03:14.483 07:20:30 -- setup/common.sh@31 -- # read -r var val _
00:03:14.483 07:20:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27250616 kB' 'MemUsed: 5579268 kB' 'SwapCached: 0 kB' 'Active: 2365060 kB' 'Inactive: 110044 kB' 'Active(anon): 2254172 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2243048 kB' 'Mapped: 35748 kB' 'AnonPages: 235180 kB' 'Shmem: 2022116 kB' 'KernelStack: 7128 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97124 kB' 'Slab: 312464 kB' 'SReclaimable: 97124 kB' 'SUnreclaim: 215340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[repetitive xtrace omitted: setup/common.sh@32 checks every key of node0's meminfo from MemTotal through HugePages_Free against HugePages_Surp and continues past each non-match]
00:03:14.484 07:20:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.484 07:20:30 -- setup/common.sh@33 -- # echo 0
00:03:14.484 07:20:30 -- setup/common.sh@33 -- # return 0
00:03:14.484 07:20:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.484 07:20:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.484 07:20:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.484 07:20:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.484 07:20:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:14.484 node0=1024 expecting 1024
00:03:14.484 07:20:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:14.484 real 0m2.412s
00:03:14.484 user 0m0.664s
00:03:14.484 sys 0m0.870s
00:03:14.484 07:20:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.484 07:20:30 -- common/autotest_common.sh@10 -- # set +x
00:03:14.484 ************************************
00:03:14.484 END TEST default_setup
00:03:14.484 ************************************
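Every value checked in these tests flows through the get_meminfo helper whose execution is traced above. A simplified reconstruction from the trace (the real helper lives in the SPDK test tree's setup/common.sh; the exact source may differ, and extglob is an assumption implied by the +([0-9]) pattern):

    shopt -s extglob
    get_meminfo() {   # usage: get_meminfo <key> [<numa-node>]
        local get=$1 node=$2 mem_f=/proc/meminfo mem line var val _
        # a node argument switches the source to that node's own meminfo file
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each key with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split on ':' and spaces
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With the state captured above, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp 0 prints 0, exactly the echo/return pairs seen in the trace.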
00:03:14.484 07:20:30 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:14.484 07:20:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:14.484 07:20:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:14.484 07:20:30 -- common/autotest_common.sh@10 -- # set +x
00:03:14.484 ************************************
00:03:14.484 START TEST per_node_1G_alloc
00:03:14.484 ************************************
00:03:14.484 07:20:30 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:03:14.484 07:20:30 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:14.484 07:20:30 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:14.484 07:20:30 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:14.484 07:20:30 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:14.484 07:20:30 -- setup/hugepages.sh@51 -- # shift
00:03:14.484 07:20:30 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:14.484 07:20:30 -- setup/hugepages.sh@52 -- # local node_ids
00:03:14.484 07:20:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.484 07:20:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:14.484 07:20:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:14.484 07:20:30 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:14.484 07:20:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:14.484 07:20:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:14.484 07:20:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:14.484 07:20:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:14.484 07:20:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:14.484 07:20:30 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:14.484 07:20:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:14.484 07:20:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:14.484 07:20:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:14.484 07:20:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:14.484 07:20:30 -- setup/hugepages.sh@73 -- # return 0
00:03:14.484 07:20:30 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:14.484 07:20:30 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:14.484 07:20:30 -- setup/hugepages.sh@146 -- # setup output
00:03:14.484 07:20:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.484 07:20:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.462 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:15.462 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.462 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:15.462 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:15.462 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:15.462 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:15.462 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:15.462 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:15.462 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:15.462 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:15.462 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:15.462 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:15.462 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:15.462 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:15.462 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:15.462 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:15.462 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
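The block above is scripts/setup.sh being invoked with NRHUGE=512 and HUGENODE=0,1, i.e. 512 two-megabyte hugepages requested on each of the two NUMA nodes (1024 total, matching the nr_hugepages=1024 that follows). An equivalent manual run, and the raw sysfs writes it boils down to, would look roughly like this (a sketch, not taken from this log; root required, 2048 kB page size assumed):

    sudo NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
    # or write the per-node counters directly:
    echo 512 | sudo tee /sys/devices/system/node/node{0,1}/hugepages/hugepages-2048kB/nr_hugepages

The 'Already using the vfio-pci driver' lines are setup.sh confirming that the NVMe device (8086 0a54) and what appear to be I/OAT DMA channels (8086 0e2x) were already bound to vfio-pci by an earlier run.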
00:03:15.723 07:20:31 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:15.723 07:20:31 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:15.723 07:20:31 -- setup/hugepages.sh@89 -- # local node
00:03:15.723 07:20:31 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.723 07:20:31 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.723 07:20:31 -- setup/hugepages.sh@92 -- # local surp
00:03:15.723 07:20:31 -- setup/hugepages.sh@93 -- # local resv
00:03:15.723 07:20:31 -- setup/hugepages.sh@94 -- # local anon
00:03:15.723 07:20:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.723 07:20:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.723 07:20:31 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.723 07:20:31 -- setup/common.sh@18 -- # local node=
00:03:15.723 07:20:31 -- setup/common.sh@19 -- # local var val
00:03:15.723 07:20:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.723 07:20:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.723 07:20:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.723 07:20:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.723 07:20:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.723 07:20:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.723 07:20:31 -- setup/common.sh@31 -- # IFS=': '
00:03:15.723 07:20:31 -- setup/common.sh@31 -- # read -r var val _
00:03:15.724 07:20:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45813204 kB' 'MemAvailable: 49318616 kB' 'Buffers: 2704 kB' 'Cached: 10211460 kB' 'SwapCached: 0 kB' 'Active: 7237756 kB' 'Inactive: 3506552 kB' 'Active(anon): 6843404 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533320 kB' 'Mapped: 212512 kB' 'Shmem: 6313260 kB' 'KReclaimable: 196260 kB' 'Slab: 568244 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371984 kB' 'KernelStack: 12800 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7975788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[repetitive xtrace omitted: setup/common.sh@32 checks every /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and continues past each non-match]
00:03:15.724 07:20:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.724 07:20:31 -- setup/common.sh@33 -- # echo 0
00:03:15.724 07:20:31 -- setup/common.sh@33 -- # return 0
00:03:15.724 07:20:31 -- setup/hugepages.sh@97 -- # anon=0
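verify_nr_hugepages has now established anon=0 and, in the lookups that follow below, reads HugePages_Surp and HugePages_Rsvd the same way before comparing totals. The arithmetic it is building up to is just this (a sketch reusing the get_meminfo helper sketched earlier; variable names mirror the trace):

    expected=1024                           # nr_hugepages requested by the test
    total=$(get_meminfo HugePages_Total)    # 1024 in this run
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    anon=$(get_meminfo AnonHugePages)       # 0, so THP is not inflating the numbers
    (( total == expected + surp + resv )) && echo "hugepage pool verified"

Surplus and reserved pages are included on the right-hand side presumably because the kernel can transiently hold pages in either state while the pool is resized.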
00:03:15.724 07:20:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.724 07:20:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.724 07:20:31 -- setup/common.sh@18 -- # local node=
00:03:15.724 07:20:31 -- setup/common.sh@19 -- # local var val
00:03:15.724 07:20:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.724 07:20:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.724 07:20:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.724 07:20:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.724 07:20:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.724 07:20:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.724 07:20:31 -- setup/common.sh@31 -- # IFS=': '
00:03:15.724 07:20:31 -- setup/common.sh@31 -- # read -r var val _
00:03:15.725 07:20:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45818308 kB' 'MemAvailable: 49323720 kB' 'Buffers: 2704 kB' 'Cached: 10211460 kB' 'SwapCached: 0 kB' 'Active: 7238172 kB' 'Inactive: 3506552 kB' 'Active(anon): 6843820 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533764 kB' 'Mapped: 212512 kB' 'Shmem: 6313260 kB' 'KReclaimable: 196260 kB' 'Slab: 568184 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371924 kB' 'KernelStack: 12816 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7975800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[repetitive xtrace omitted: setup/common.sh@32 checks every /proc/meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp and continues past each non-match]
00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.726 07:20:31 -- setup/common.sh@33 -- # echo 0
00:03:15.726 07:20:31 -- setup/common.sh@33 -- # return 0
00:03:15.726 07:20:31 -- setup/hugepages.sh@99 -- # surp=0
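For readers spot-checking these numbers by hand: the four hugepage counters the test keeps re-reading have fixed kernel semantics. HugePages_Total is the pool size, HugePages_Free the pages not yet handed out, HugePages_Rsvd pages promised to existing mappings but not yet faulted in, and HugePages_Surp overcommit pages above nr_hugepages. A one-liner against the same keys the snapshots above show:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo

On this runner it would report 1024, 1024, 0, and 0 respectively, matching the snapshot data.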
setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # continue 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.726 07:20:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.726 07:20:31 -- setup/common.sh@32 -- # 
continue
[... repetitive xtrace omitted: the IFS=': '/read/compare loop "continue"s past every remaining /proc/meminfo field (Zswap through HugePages_Free) until HugePages_Rsvd matches ...]
00:03:15.727 07:20:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.727 07:20:31 -- setup/common.sh@33 -- # echo 0
00:03:15.727 07:20:31 -- setup/common.sh@33 -- # return 0
00:03:15.727 07:20:31 -- setup/hugepages.sh@100 -- # resv=0
00:03:15.727 07:20:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:15.727 nr_hugepages=1024
00:03:15.727 07:20:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:15.727 resv_hugepages=0
00:03:15.727 07:20:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:15.727 surplus_hugepages=0
00:03:15.727 07:20:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:15.727 anon_hugepages=0
00:03:15.727 07:20:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.727 07:20:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
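The long compare-and-continue run condensed above is setup/common.sh's get_meminfo walking /proc/meminfo one "Field: value" pair at a time until the requested field matches. A minimal standalone sketch of the same pattern (illustrative function name, assuming bash on Linux; this is not the autotest source itself):

    # get_field KEY -- print KEY's value from /proc/meminfo using the same
    # split-on-': '-and-compare loop the trace shows.
    get_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$want" ]]; then
                echo "$val"    # e.g. "0" for HugePages_Rsvd in this run
                return 0
            fi
        done < /proc/meminfo
        return 1               # field not present
    }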
00:03:15.727 07:20:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.727 07:20:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.727 07:20:31 -- setup/common.sh@18 -- # local node=
00:03:15.727 07:20:31 -- setup/common.sh@19 -- # local var val
00:03:15.727 07:20:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.727 07:20:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.727 07:20:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.727 07:20:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.727 07:20:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.727 07:20:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.727 07:20:31 -- setup/common.sh@31 -- # IFS=': '
00:03:15.727 07:20:31 -- setup/common.sh@31 -- # read -r var val _
00:03:15.727 07:20:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45818312 kB' 'MemAvailable: 49323724 kB' 'Buffers: 2704 kB' 'Cached: 10211476 kB' 'SwapCached: 0 kB' 'Active: 7237860 kB' 'Inactive: 3506552 kB' 'Active(anon): 6843508 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533432 kB' 'Mapped: 212544 kB' 'Shmem: 6313276 kB' 'KReclaimable: 196260 kB' 'Slab: 568276 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372016 kB' 'KernelStack: 12896 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7975828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... repetitive xtrace omitted: the loop "continue"s past MemTotal through Unaccepted until HugePages_Total matches ...]
00:03:15.730 07:20:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.730 07:20:31 -- setup/common.sh@33 -- # echo 1024
00:03:15.730 07:20:31 -- setup/common.sh@33 -- # return 0
00:03:15.730 07:20:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.730 07:20:31 -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.730 07:20:31 -- setup/hugepages.sh@27 -- # local node
00:03:15.730 07:20:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.730 07:20:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.730 07:20:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.730 07:20:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.730 07:20:31 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.730 07:20:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
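The per-node lookups that follow switch mem_f from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips. A rough standalone equivalent (hypothetical helper name, assuming bash with extglob available):

    shopt -s extglob                      # enables the +([0-9]) pattern below
    node_field() {
        local node=$1 want=$2 line var val _
        local mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }   # drop the "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # node_field 0 HugePages_Surp  ->  0 in the dump that follows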
00:03:15.730 07:20:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.730 07:20:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.730 07:20:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.730 07:20:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.730 07:20:31 -- setup/common.sh@18 -- # local node=0
00:03:15.730 07:20:31 -- setup/common.sh@19 -- # local var val
00:03:15.730 07:20:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.730 07:20:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.730 07:20:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.730 07:20:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.730 07:20:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.730 07:20:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.730 07:20:31 -- setup/common.sh@31 -- # IFS=': '
00:03:15.730 07:20:31 -- setup/common.sh@31 -- # read -r var val _
00:03:15.730 07:20:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28305988 kB' 'MemUsed: 4523896 kB' 'SwapCached: 0 kB' 'Active: 2365420 kB' 'Inactive: 110044 kB' 'Active(anon): 2254532 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2243128 kB' 'Mapped: 35744 kB' 'AnonPages: 235480 kB' 'Shmem: 2022196 kB' 'KernelStack: 7176 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97124 kB' 'Slab: 312368 kB' 'SReclaimable: 97124 kB' 'SUnreclaim: 215244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive xtrace omitted: the node0 scan "continue"s past every field until HugePages_Surp matches ...]
00:03:15.991 07:20:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.991 07:20:31 -- setup/common.sh@33 -- # echo 0
00:03:15.991 07:20:31 -- setup/common.sh@33 -- # return 0
00:03:15.991 07:20:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.991 07:20:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.991 07:20:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.991 07:20:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:15.991 07:20:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.991 07:20:31 -- setup/common.sh@18 -- # local node=1
00:03:15.991 07:20:31 -- setup/common.sh@19 -- # local var val
00:03:15.991 07:20:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.991 07:20:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.991 07:20:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:15.991 07:20:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:15.991 07:20:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.991 07:20:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.991 07:20:31 -- setup/common.sh@31 -- # IFS=': '
00:03:15.991 07:20:31 -- setup/common.sh@31 -- # read -r var val _
00:03:15.991 07:20:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17512576 kB' 'MemUsed: 10199248 kB' 'SwapCached: 0 kB' 'Active: 4872772 kB' 'Inactive: 3396508 kB' 'Active(anon): 4589308 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7971080 kB' 'Mapped: 176800 kB' 'AnonPages: 298244 kB' 'Shmem: 4291108 kB' 'KernelStack: 5720 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99136 kB' 'Slab: 255908 kB' 'SReclaimable: 99136 kB' 'SUnreclaim: 156772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive xtrace omitted: the node1 scan "continue"s past every field until HugePages_Surp matches ...]
00:03:15.992 07:20:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.992 07:20:31 -- setup/common.sh@33 -- # echo 0
00:03:15.992 07:20:31 -- setup/common.sh@33 -- # return 0
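Both per-node lookups returned HugePages_Surp=0, so nodes_test keeps the 512 pages per node already read from the kernel. The tally that follows amounts to this (values copied from the run above; array names are illustrative, not the test's exact code):

    # Compare the test's expected per-node counts against what the kernel
    # reports; mirrors the "nodeN=... expecting ..." lines in the log below.
    nodes_test=([0]=512 [1]=512)   # expected: kernel count + surp + resv (both 0)
    nodes_sys=([0]=512 [1]=512)    # reported by node0/node1 meminfo
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || exit 1
    done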
00:03:15.992 07:20:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.992 07:20:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.992 07:20:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.992 07:20:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.992 07:20:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:15.992 node0=512 expecting 512
00:03:15.992 07:20:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.992 07:20:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.992 07:20:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.992 07:20:31 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:15.992 node1=512 expecting 512
00:03:15.992 07:20:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:15.992 
00:03:15.992 real 0m1.498s
00:03:15.992 user 0m0.652s
00:03:15.992 sys 0m0.811s
00:03:15.992 07:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:15.992 07:20:31 -- common/autotest_common.sh@10 -- # set +x
00:03:15.992 ************************************
00:03:15.992 END TEST per_node_1G_alloc
00:03:15.992 ************************************
00:03:15.992 07:20:31 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
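The even_2G_alloc test that starts next sizes its pool the same way: a 2 GiB request becomes 2 MiB hugepages split evenly across both NUMA nodes. Back-of-the-envelope version (treating both quantities as kB, an assumption that reproduces the 1024/512 figures traced below):

    size=2097152                                      # requested pool, kB (2 GiB)
    hugepagesize=2048                                 # per meminfo: Hugepagesize: 2048 kB
    nr_hugepages=$(( size / hugepagesize ))           # 1024
    no_nodes=2
    echo "per node: $(( nr_hugepages / no_nodes ))"   # 512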
00:03:15.992 07:20:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:15.992 07:20:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:15.992 07:20:31 -- common/autotest_common.sh@10 -- # set +x
00:03:15.992 ************************************
00:03:15.992 START TEST even_2G_alloc
00:03:15.992 ************************************
00:03:15.992 07:20:31 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:03:15.992 07:20:31 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:15.992 07:20:31 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:15.992 07:20:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:15.992 07:20:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:15.992 07:20:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:15.992 07:20:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:15.992 07:20:31 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:15.992 07:20:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.992 07:20:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:15.992 07:20:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.992 07:20:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.992 07:20:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.992 07:20:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:15.992 07:20:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:15.992 07:20:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.992 07:20:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:15.992 07:20:31 -- setup/hugepages.sh@83 -- # : 512
00:03:15.992 07:20:31 -- setup/hugepages.sh@84 -- # : 1
00:03:15.992 07:20:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.992 07:20:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:15.992 07:20:31 -- setup/hugepages.sh@83 -- # : 0
00:03:15.992 07:20:31 -- setup/hugepages.sh@84 -- # : 0
00:03:15.992 07:20:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:15.992 07:20:31 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:15.992 07:20:31 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:15.992 07:20:31 -- setup/hugepages.sh@153 -- # setup output
00:03:15.992 07:20:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.992 07:20:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:17.371 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:17.371 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.371 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:17.371 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:17.371 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:17.371 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:17.371 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:17.371 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:17.371 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:17.371 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:17.371 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:17.371 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:17.371 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:17.371 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:17.371 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:17.371 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:17.371 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:17.371 07:20:33 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:17.371 07:20:33 -- setup/hugepages.sh@89 -- # local node
00:03:17.371 07:20:33 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.371 07:20:33 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.371 07:20:33 -- setup/hugepages.sh@92 -- # local surp
00:03:17.371 07:20:33 -- setup/hugepages.sh@93 -- # local resv
00:03:17.371 07:20:33 -- setup/hugepages.sh@94 -- # local anon
00:03:17.371 07:20:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.371 07:20:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.371 07:20:33 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.371 07:20:33 -- setup/common.sh@18 -- # local node=
00:03:17.371 07:20:33 -- setup/common.sh@19 -- # local var val
00:03:17.371 07:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:17.371 07:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.371 07:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.371 07:20:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.371 07:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.371 07:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.371 07:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:17.371 07:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:17.371 07:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45807188 kB' 'MemAvailable: 49312600 kB' 'Buffers: 2704 kB' 'Cached: 10211552 kB' 'SwapCached: 0 kB' 'Active: 7235344 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840992 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530964 kB' 'Mapped: 212580 kB' 'Shmem: 6313352 kB' 'KReclaimable: 196260 kB' 'Slab: 568220 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371960 kB' 'KernelStack: 12864 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7964356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... repetitive xtrace omitted: the loop "continue"s past MemTotal through HardwareCorrupted until AnonHugePages matches ...]
00:03:17.372 07:20:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.372 07:20:33 -- setup/common.sh@33 -- # echo 0
00:03:17.372 07:20:33 -- setup/common.sh@33 -- #
return 0 00:03:17.372 07:20:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:17.372 07:20:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.372 07:20:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.372 07:20:33 -- setup/common.sh@18 -- # local node= 00:03:17.372 07:20:33 -- setup/common.sh@19 -- # local var val 00:03:17.372 07:20:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.372 07:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.372 07:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.372 07:20:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.372 07:20:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.372 07:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.372 07:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45811348 kB' 'MemAvailable: 49316760 kB' 'Buffers: 2704 kB' 'Cached: 10211552 kB' 'SwapCached: 0 kB' 'Active: 7234920 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840568 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530496 kB' 'Mapped: 211708 kB' 'Shmem: 6313352 kB' 'KReclaimable: 196260 kB' 'Slab: 568188 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371928 kB' 'KernelStack: 12768 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7961944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.372 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.372 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.372 07:20:33 -- setup/common.sh@31 -- 
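What the trace keeps repeating is one helper in setup/common.sh that scans /proc/meminfo field by field. A minimal sketch of that loop, reconstructed from the trace (the function name and exact quoting are ours, not the verbatim SPDK source):

get_meminfo_sketch() {
    local get=$1 var val _
    # Scan /proc/meminfo one "Key: value [kB]" line at a time.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys produce the 'continue' records above
        echo "${val:-0}"                   # AnonHugePages -> 0 on this host, hence 'echo 0'
        return 0
    done < /proc/meminfo
    return 1                               # key not present (does not occur in this run)
}
get_meminfo_sketch AnonHugePages           # prints 0, matching the trace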
00:03:17.372 07:20:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.372 07:20:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.372 07:20:33 -- setup/common.sh@18 -- # local node=
00:03:17.372 07:20:33 -- setup/common.sh@19 -- # local var val
00:03:17.372 07:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:17.372 07:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.372 07:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.372 07:20:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.372 07:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.372 07:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.372 07:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:17.372 07:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:17.372 07:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45811348 kB' 'MemAvailable: 49316760 kB' 'Buffers: 2704 kB' 'Cached: 10211552 kB' 'SwapCached: 0 kB' 'Active: 7234920 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840568 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530496 kB' 'Mapped: 211708 kB' 'Shmem: 6313352 kB' 'KReclaimable: 196260 kB' 'Slab: 568188 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371928 kB' 'KernelStack: 12768 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7961944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... @31/@32 records for every field before HugePages_Surp elided; each key is compared and skipped via continue ...]
00:03:17.373 07:20:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.373 07:20:33 -- setup/common.sh@33 -- # echo 0
00:03:17.373 07:20:33 -- setup/common.sh@33 -- # return 0
00:03:17.373 07:20:33 -- setup/hugepages.sh@99 -- # surp=0
00:03:17.373 07:20:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.373 07:20:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:17.373 07:20:33 -- setup/common.sh@18 -- # local node=
00:03:17.373 07:20:33 -- setup/common.sh@19 -- # local var val
00:03:17.373 07:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:17.373 07:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.373 07:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.374 07:20:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.374 07:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.374 07:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.374 07:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:17.374 07:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:17.374 07:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45811684 kB' 'MemAvailable: 49317096 kB' 'Buffers: 2704 kB' 'Cached: 10211564 kB' 'SwapCached: 0 kB' 'Active: 7234832 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840480 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530428 kB' 'Mapped: 211672 kB' 'Shmem: 6313364 kB' 'KReclaimable: 196260 kB' 'Slab: 568220 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371960 kB' 'KernelStack: 12848 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7961960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... @31/@32 records for every field before HugePages_Rsvd elided; each key is compared and skipped via continue ...]
00:03:17.375 07:20:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.375 07:20:33 -- setup/common.sh@33 -- # echo 0
00:03:17.375 07:20:33 -- setup/common.sh@33 -- # return 0
00:03:17.375 07:20:33 -- setup/hugepages.sh@100 -- # resv=0
00:03:17.375 07:20:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:17.375 nr_hugepages=1024
00:03:17.375 07:20:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:17.375 resv_hugepages=0
00:03:17.375 07:20:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:17.375 surplus_hugepages=0
00:03:17.375 07:20:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:17.375 anon_hugepages=0
00:03:17.375 07:20:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.375 07:20:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
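The four figures just echoed (nr_hugepages, resv, surplus, anon) all come out of those scans. The same consistency check can be reproduced outside the harness; a sketch under the assumption that 1024 pages were requested, as this run configured (variable names ours):

want=1024                                                      # pages the run requested
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024 in the snapshots above
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)      # 0
anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)       # 0 (reported in kB)
(( total == want + surp + resv )) && (( anon == 0 )) && echo 'hugepage accounting consistent'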
00:03:17.375 07:20:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.375 07:20:33 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.375 07:20:33 -- setup/common.sh@18 -- # local node=
00:03:17.375 07:20:33 -- setup/common.sh@19 -- # local var val
00:03:17.375 07:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:17.375 07:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.375 07:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.375 07:20:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.375 07:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.375 07:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.375 07:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:17.375 07:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:17.375 07:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45811684 kB' 'MemAvailable: 49317096 kB' 'Buffers: 2704 kB' 'Cached: 10211580 kB' 'SwapCached: 0 kB' 'Active: 7234844 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840492 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530428 kB' 'Mapped: 211672 kB' 'Shmem: 6313380 kB' 'KReclaimable: 196260 kB' 'Slab: 568220 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371960 kB' 'KernelStack: 12848 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7961972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... @31/@32 records for every field before HugePages_Total elided; each key is compared and skipped via continue ...]
00:03:17.376 07:20:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.376 07:20:33 -- setup/common.sh@33 -- # echo 1024
00:03:17.376 07:20:33 -- setup/common.sh@33 -- # return 0
00:03:17.376 07:20:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.376 07:20:33 -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.376 07:20:33 -- setup/hugepages.sh@27 -- # local node
00:03:17.376 07:20:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.376 07:20:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:17.376 07:20:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.376 07:20:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:17.376 07:20:33 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.376 07:20:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:17.376 07:20:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.376 07:20:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
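From here the harness re-runs the same scan once per NUMA node, switching mem_f from /proc/meminfo to the per-node sysfs file. Roughly, the per-node query amounts to this sketch (loop and variable names ours; this run expects the 512-pages-per-node split set up above):

for f in /sys/devices/system/node/node[0-9]*/meminfo; do
    n=${f%/meminfo}; n=${n##*node}
    # Per-node HugePages_* lines carry no kB suffix; take the last field.
    printf 'node%s HugePages_Total=%s\n' "$n" "$(awk '/HugePages_Total:/ {print $NF}' "$f")"
done
# node0 reports HugePages_Total: 512 in the snapshot below; node1 is expected to match.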
07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # continue 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.377 07:20:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.377 07:20:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.377 07:20:33 -- setup/common.sh@33 -- # echo 0 00:03:17.377 07:20:33 -- setup/common.sh@33 -- # return 0 00:03:17.377 07:20:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.377 07:20:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.377 07:20:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.377 07:20:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.377 07:20:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.378 07:20:33 -- setup/common.sh@18 -- # local node=1 00:03:17.378 07:20:33 -- setup/common.sh@19 -- # local var val 00:03:17.378 07:20:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.378 07:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.378 07:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.378 07:20:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.378 07:20:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.378 07:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.378 
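The node0 read above is one full get_meminfo cycle: pick the per-node meminfo file, mapfile it, strip the "Node N " prefix, then split each line on IFS=': ' until the requested key matches and its value is echoed. The same cycle repeats for node1 below. As a standalone illustration, here is a minimal sketch of that pattern; the body mirrors what the trace shows but is a reconstruction, not a quote of setup/common.sh:

  #!/usr/bin/env bash
  # Minimal sketch of the get_meminfo pattern traced above: read one key
  # (e.g. HugePages_Surp) from a NUMA node's meminfo, falling back to
  # /proc/meminfo when no node is given.
  shopt -s extglob  # needed for the +([0-9]) pattern in the prefix strip
  get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix on node files
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done
    return 1
  }
  get_meminfo HugePages_Surp 0   # e.g. prints 0 on the node traced above

The extglob prefix strip is what lets the same loop serve both /proc/meminfo and the per-node files, since only the latter carry the "Node N " prefix.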
00:03:17.377 07:20:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.377 07:20:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:17.377 07:20:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:17.377 07:20:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.378 07:20:33 -- setup/common.sh@18 -- # local node=1
00:03:17.378 07:20:33 -- setup/common.sh@19 -- # local var val
00:03:17.378 07:20:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:17.378 07:20:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.378 07:20:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:17.378 07:20:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:17.378 07:20:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.378 07:20:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.378 07:20:33 -- setup/common.sh@31 -- # IFS=': '
00:03:17.378 07:20:33 -- setup/common.sh@31 -- # read -r var val _
00:03:17.378 07:20:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17527340 kB' 'MemUsed: 10184484 kB' 'SwapCached: 0 kB' 'Active: 4871924 kB' 'Inactive: 3396508 kB' 'Active(anon): 4588460 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7971168 kB' 'Mapped: 176664 kB' 'AnonPages: 297364 kB' 'Shmem: 4291196 kB' 'KernelStack: 5672 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99136 kB' 'Slab: 255912 kB' 'SReclaimable: 99136 kB' 'SUnreclaim: 156776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the same setup/common.sh@31-32 per-key scan repeats over the node1 snapshot, hitting continue on every key until HugePages_Surp matches]
00:03:17.378 07:20:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.378 07:20:33 -- setup/common.sh@33 -- # echo 0
00:03:17.378 07:20:33 -- setup/common.sh@33 -- # return 0
00:03:17.378 07:20:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.378 07:20:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.379 07:20:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.379 07:20:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.379 07:20:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:17.379 node0=512 expecting 512
00:03:17.379 07:20:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.379 07:20:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.379 07:20:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.379 07:20:33 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:17.379 node1=512 expecting 512
00:03:17.379 07:20:33 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:17.379 real 0m1.495s
00:03:17.379 user 0m0.629s
00:03:17.379 sys 0m0.834s
00:03:17.379 07:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:17.379 07:20:33 -- common/autotest_common.sh@10 -- # set +x
00:03:17.379 ************************************
00:03:17.379 END TEST even_2G_alloc
00:03:17.379 ************************************
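The even_2G_alloc verification just traced keys two associative arrays with the per-node counts (sorted_t from what the test expects, sorted_s from what /sys reports) and then compares the key sets, which is why a single [[ 512 == \5\1\2 ]] settles both nodes at once. A toy reconstruction of that idea, under the assumption that the key sets are what get compared (equal key sets enumerate identically in practice):

  # Toy version of the sorted_t/sorted_s comparison from the trace.
  # Using associative-array keys as a set makes the check independent of
  # which node got which count: {512, 512} vs {512, 512} collapses to "512".
  declare -A sorted_t sorted_s
  nodes_test=(512 512)   # what the test allocated per node
  nodes_sys=(512 512)    # what /sys reported per node
  for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK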
00:03:17.379 07:20:33 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:17.379 07:20:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:17.379 07:20:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:17.379 07:20:33 -- common/autotest_common.sh@10 -- # set +x
00:03:17.379 ************************************
00:03:17.379 START TEST odd_alloc
00:03:17.379 ************************************
00:03:17.379 07:20:33 -- common/autotest_common.sh@1104 -- # odd_alloc
00:03:17.379 07:20:33 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:17.379 07:20:33 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:17.379 07:20:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:17.379 07:20:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:17.379 07:20:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:17.379 07:20:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:17.379 07:20:33 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:17.379 07:20:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:17.379 07:20:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:17.379 07:20:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:17.379 07:20:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:17.379 07:20:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:17.379 07:20:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:17.379 07:20:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:17.379 07:20:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.379 07:20:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:17.379 07:20:33 -- setup/hugepages.sh@83 -- # : 513
00:03:17.379 07:20:33 -- setup/hugepages.sh@84 -- # : 1
00:03:17.379 07:20:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.379 07:20:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:17.379 07:20:33 -- setup/hugepages.sh@83 -- # : 0
00:03:17.379 07:20:33 -- setup/hugepages.sh@84 -- # : 0
00:03:17.379 07:20:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.379 07:20:33 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:17.379 07:20:33 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:17.379 07:20:33 -- setup/hugepages.sh@160 -- # setup output
00:03:17.379 07:20:33 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.379 07:20:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:18.755 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:18.755 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.755 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:18.755 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:18.755 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:18.755 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:18.755 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:18.755 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:18.755 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:18.755 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:18.755 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:18.755 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:18.755 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:18.755 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:18.755 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:18.755 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:18.755 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
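The odd_alloc sizing traced above hands get_test_nr_hugepages 2098176 kB (HUGEMEM=2049 MB). At the 2048 kB Hugepagesize visible in the snapshots that is 1024.5 pages, so the odd count 1025 is consistent with a ceiling division, and the ": 513" / ": 1" evaluations show one node taking the remainder. A quick check of that arithmetic; the rounding rule is inferred from the numbers, not quoted from the script:

  size_kb=2098176          # 2049 MB requested via HUGEMEM
  page_kb=2048             # Hugepagesize from the meminfo snapshots
  nr=$(( (size_kb + page_kb - 1) / page_kb ))   # ceiling division -> 1025
  no_nodes=2
  per_node=$(( nr / no_nodes ))                 # 512
  remainder=$(( nr % no_nodes ))                # 1 -> one node gets 513
  echo "nr_hugepages=$nr split=$((per_node + remainder))/$per_node"

This prints nr_hugepages=1025 split=513/512, matching the nodes_test assignments of 513 and 512 in the trace and the Hugetlb figure of 2099200 kB (1025 x 2048 kB) in the snapshots that follow.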
00:03:18.755 07:20:34 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:18.755 07:20:34 -- setup/hugepages.sh@89 -- # local node
00:03:18.755 07:20:34 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.755 07:20:34 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.755 07:20:34 -- setup/hugepages.sh@92 -- # local surp
00:03:18.755 07:20:34 -- setup/hugepages.sh@93 -- # local resv
00:03:18.755 07:20:34 -- setup/hugepages.sh@94 -- # local anon
00:03:18.755 07:20:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.755 07:20:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.755 07:20:34 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.755 07:20:34 -- setup/common.sh@18 -- # local node=
00:03:18.755 07:20:34 -- setup/common.sh@19 -- # local var val
00:03:18.755 07:20:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.755 07:20:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.755 07:20:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.755 07:20:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.755 07:20:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.755 07:20:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.755 07:20:34 -- setup/common.sh@31 -- # IFS=': '
00:03:18.755 07:20:34 -- setup/common.sh@31 -- # read -r var val _
00:03:18.755 07:20:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45805488 kB' 'MemAvailable: 49310900 kB' 'Buffers: 2704 kB' 'Cached: 10211644 kB' 'SwapCached: 0 kB' 'Active: 7235252 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840900 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530680 kB' 'Mapped: 211688 kB' 'Shmem: 6313444 kB' 'KReclaimable: 196260 kB' 'Slab: 568324 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372064 kB' 'KernelStack: 12864 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7962156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[xtrace elided: setup/common.sh@31-32 scans the system-wide snapshot key by key, hitting continue on every key until AnonHugePages matches]
00:03:18.756 07:20:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.756 07:20:34 -- setup/common.sh@33 -- # echo 0
00:03:18.756 07:20:34 -- setup/common.sh@33 -- # return 0
00:03:18.756 07:20:34 -- setup/hugepages.sh@97 -- # anon=0
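The AnonHugePages read above, like the HugePages_Surp and HugePages_Rsvd reads that follow, walks every meminfo key in bash. Where only a single field is needed, an awk lookup is a common equivalent; this is shown as a generic alternative, not something setup/common.sh itself does:

  # Generic one-liner equivalent of get_meminfo AnonHugePages:
  awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo
  # Per-node files carry a "Node N" prefix, which shifts the field positions:
  awk '$3 == "HugePages_Surp:" { print $4 }' /sys/devices/system/node/node0/meminfo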
00:03:18.756 07:20:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.756 07:20:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.756 07:20:34 -- setup/common.sh@18 -- # local node=
00:03:18.756 07:20:34 -- setup/common.sh@19 -- # local var val
00:03:18.756 07:20:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.756 07:20:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.756 07:20:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.756 07:20:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.756 07:20:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.756 07:20:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.756 07:20:34 -- setup/common.sh@31 -- # IFS=': '
00:03:18.756 07:20:34 -- setup/common.sh@31 -- # read -r var val _
00:03:18.756 07:20:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45805236 kB' 'MemAvailable: 49310648 kB' 'Buffers: 2704 kB' 'Cached: 10211644 kB' 'SwapCached: 0 kB' 'Active: 7235912 kB' 'Inactive: 3506552 kB' 'Active(anon): 6841560 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531360 kB' 'Mapped: 211688 kB' 'Shmem: 6313444 kB' 'KReclaimable: 196260 kB' 'Slab: 568324 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372064 kB' 'KernelStack: 12880 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7962168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[xtrace elided: setup/common.sh@31-32 scans the snapshot key by key, hitting continue on every key until HugePages_Surp matches]
00:03:18.757 07:20:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.757 07:20:34 -- setup/common.sh@33 -- # echo 0
00:03:18.757 07:20:34 -- setup/common.sh@33 -- # return 0
00:03:18.757 07:20:34 -- setup/hugepages.sh@99 -- # surp=0
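With anon=0 and surp=0 read back, the HugePages_Rsvd read below completes the inputs for the accounting check seen earlier in the trace, (( 1024 == nr_hugepages + surp + resv )). A standalone version of that sanity check against /proc/meminfo; read_key is a hypothetical helper, and 1025 is this run's requested page count taken from the snapshots:

  # Hypothetical helper: fetch one value from /proc/meminfo by key.
  read_key() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
  total=$(read_key HugePages_Total)   # 1025 in the snapshots above
  surp=$(read_key HugePages_Surp)     # 0
  rsvd=$(read_key HugePages_Rsvd)     # 0
  # Mirrors the verifier's identity: HugePages_Total == nr_hugepages + surp + resv.
  (( total == 1025 + surp + rsvd )) && echo "hugepage accounting consistent"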
00:03:18.757 07:20:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.757 07:20:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.757 07:20:34 -- setup/common.sh@18 -- # local node=
00:03:18.757 07:20:34 -- setup/common.sh@19 -- # local var val
00:03:18.757 07:20:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.757 07:20:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.757 07:20:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.757 07:20:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.757 07:20:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.757 07:20:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.757 07:20:34 -- setup/common.sh@31 -- # IFS=': '
00:03:18.757 07:20:34 -- setup/common.sh@31 -- # read -r var val _
00:03:18.757 07:20:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45805576 kB' 'MemAvailable: 49310988 kB' 'Buffers: 2704 kB' 'Cached: 10211660 kB' 'SwapCached: 0 kB' 'Active: 7234908 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840556 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530304 kB' 'Mapped: 211688 kB' 'Shmem: 6313460 kB' 'KReclaimable: 196260 kB' 'Slab: 568324 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372064 kB' 'KernelStack: 12896 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7962184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[xtrace elided: setup/common.sh@31-32 scans the snapshot keys from MemTotal through SReclaimable against HugePages_Rsvd, hitting continue on each]
00:03:19.019 07:20:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.019 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.019 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.019 07:20:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.019 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.019 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.019 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.019 07:20:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.019 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.019 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.019 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.020 07:20:34 -- setup/common.sh@33 -- # echo 0 00:03:19.020 07:20:34 -- setup/common.sh@33 -- # return 0 00:03:19.020 07:20:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:19.020 07:20:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:19.020 nr_hugepages=1025 00:03:19.020 07:20:34 -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:03:19.020 resv_hugepages=0 00:03:19.020 07:20:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.020 surplus_hugepages=0 00:03:19.020 07:20:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.020 anon_hugepages=0 00:03:19.020 07:20:34 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:19.020 07:20:34 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:19.020 07:20:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.020 07:20:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.020 07:20:34 -- setup/common.sh@18 -- # local node= 00:03:19.020 07:20:34 -- setup/common.sh@19 -- # local var val 00:03:19.020 07:20:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:19.020 07:20:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.020 07:20:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.020 07:20:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.020 07:20:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.020 07:20:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45805576 kB' 'MemAvailable: 49310988 kB' 'Buffers: 2704 kB' 'Cached: 10211660 kB' 'SwapCached: 0 kB' 'Active: 7235020 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530416 kB' 'Mapped: 211688 kB' 'Shmem: 6313460 kB' 'KReclaimable: 196260 kB' 'Slab: 568324 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372064 kB' 'KernelStack: 12880 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7962196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.020 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.020 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- 
setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # continue 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.021 07:20:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.021 07:20:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.021 07:20:34 -- setup/common.sh@33 -- # echo 1025 00:03:19.021 07:20:34 -- setup/common.sh@33 -- # return 0 00:03:19.021 07:20:34 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:19.021 07:20:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.021 07:20:34 -- setup/hugepages.sh@27 -- # local node 00:03:19.021 07:20:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.021 07:20:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:19.021 07:20:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.021 07:20:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:19.021 07:20:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.021 07:20:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.021 07:20:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.021 07:20:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.021 07:20:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.021 07:20:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.021 07:20:34 -- setup/common.sh@18 -- # local node=0 00:03:19.021 07:20:34 -- setup/common.sh@19 -- # local var val 00:03:19.021 07:20:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:19.021 07:20:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.021 
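Every counter in this test comes out of setup/common.sh's get_meminfo, whose per-field scan is what the condensed xtrace above repeats. A minimal bash sketch of that logic, hedged as a simplified reconstruction rather than the shipped helper (the real function uses mapfile plus an extglob prefix strip, as the trace shows; get_meminfo_sketch and its exact flow are illustrative):

# get_meminfo_sketch KEY [NODE] -- print the value of one meminfo field.
# Hypothetical simplified equivalent of setup/common.sh:get_meminfo.
get_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # Per-node statistics live under sysfs when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}   # per-node files prefix every line with "Node N"
        IFS=': ' read -r var val _ <<<"$line"
        # This comparison is the step the condensed xtrace repeats once per field.
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Example: the values the per-node checks that follow would read.
# get_meminfo_sketch HugePages_Surp 0    -> 0
# get_meminfo_sketch HugePages_Total 1   -> 513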
00:03:19.021 07:20:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.021 07:20:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.021 07:20:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.021 07:20:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.021 07:20:34 -- setup/common.sh@18 -- # local node=0
00:03:19.021 07:20:34 -- setup/common.sh@19 -- # local var val
00:03:19.021 07:20:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.021 07:20:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.022 07:20:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.022 07:20:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.022 07:20:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.022 07:20:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.022 07:20:34 -- setup/common.sh@31 -- # IFS=': '
00:03:19.022 07:20:34 -- setup/common.sh@31 -- # read -r var val _
00:03:19.022 07:20:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28295904 kB' 'MemUsed: 4533980 kB' 'SwapCached: 0 kB' 'Active: 2362296 kB' 'Inactive: 110044 kB' 'Active(anon): 2251408 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2243148 kB' 'Mapped: 35016 kB' 'AnonPages: 232304 kB' 'Shmem: 2022216 kB' 'KernelStack: 7160 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97124 kB' 'Slab: 312408 kB' 'SReclaimable: 97124 kB' 'SUnreclaim: 215284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[per-key xtrace scan elided: the loop skips every node0 field until HugePages_Surp matches]
00:03:19.022 07:20:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.022 07:20:34 -- setup/common.sh@33 -- # echo 0
00:03:19.022 07:20:34 -- setup/common.sh@33 -- # return 0
00:03:19.022 07:20:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.022 07:20:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.022 07:20:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.022 07:20:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:19.022 07:20:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.023 07:20:34 -- setup/common.sh@18 -- # local node=1
00:03:19.023 07:20:34 -- setup/common.sh@19 -- # local var val
00:03:19.023 07:20:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.023 07:20:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.023 07:20:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:19.023 07:20:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:19.023 07:20:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.023 07:20:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.023 07:20:34 -- setup/common.sh@31 -- # IFS=': '
00:03:19.023 07:20:34 -- setup/common.sh@31 -- # read -r var val _
00:03:19.023 07:20:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17509796 kB' 'MemUsed: 10202028 kB' 'SwapCached: 0 kB' 'Active: 4872840 kB' 'Inactive: 3396508 kB' 'Active(anon): 4589376 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7971256 kB' 'Mapped: 176672 kB' 'AnonPages: 298200 kB' 'Shmem: 4291284 kB' 'KernelStack: 5720 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99136 kB' 'Slab: 255924 kB' 'SReclaimable: 99136 kB' 'SUnreclaim: 156788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[per-key xtrace scan elided: the loop skips every node1 field until HugePages_Surp matches]
00:03:19.023 07:20:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.023 07:20:35 -- setup/common.sh@33 -- # echo 0
00:03:19.023 07:20:35 -- setup/common.sh@33 -- # return 0
00:03:19.023 07:20:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.023 07:20:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.023 07:20:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.023 07:20:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.023 07:20:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:19.024 node0=512 expecting 513
00:03:19.024 07:20:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.024 07:20:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.024 07:20:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.024 07:20:35 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:19.024 node1=513 expecting 512
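The two expecting lines feed the order-insensitive comparison at hugepages.sh@130 just below: both the observed totals (nodes_test) and the requested ones (nodes_sys) are used as array indices, so listing each array's keys yields the two multisets in ascending order. A sketch of that idiom, reconstructed from the @126-@128 trace rather than copied from the script:

# Order-insensitive check: use each count as an index, then compare sorted key lists.
nodes_test=([0]=512 [1]=513)   # totals read back per node via get_meminfo
nodes_sys=([0]=512 [1]=513)    # totals the test asked the kernel for
declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # the count becomes an index, i.e. a set member
    sorted_s[nodes_sys[node]]=1
done
# ${!arr[*]} lists indices in ascending order, so both sides expand to
# "512 513" even though node0 got 512 where 513 was expected and vice versa.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node totals match'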
00:03:19.024 07:20:35 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:19.024
00:03:19.024 real 0m1.531s
00:03:19.024 user 0m0.669s
00:03:19.024 sys 0m0.829s
00:03:19.024 07:20:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.024 07:20:35 -- common/autotest_common.sh@10 -- # set +x
00:03:19.024 ************************************
00:03:19.024 END TEST odd_alloc
00:03:19.024 ************************************
00:03:19.024 07:20:35 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:19.024 07:20:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:19.024 07:20:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:19.024 07:20:35 -- common/autotest_common.sh@10 -- # set +x
00:03:19.024 ************************************
00:03:19.024 START TEST custom_alloc
00:03:19.024 ************************************
00:03:19.024 07:20:35 -- common/autotest_common.sh@1104 -- # custom_alloc
00:03:19.024 07:20:35 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:19.024 07:20:35 -- setup/hugepages.sh@169 -- # local node
00:03:19.024 07:20:35 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:19.024 07:20:35 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:19.024 07:20:35 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:19.024 07:20:35 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:19.024 07:20:35 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:19.024 07:20:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:19.024 07:20:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:19.024 07:20:35 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:19.024 07:20:35 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.024 07:20:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:19.024 07:20:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.024 07:20:35 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.024 07:20:35 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.024 07:20:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:19.024 07:20:35 -- setup/hugepages.sh@83 -- # : 256
00:03:19.024 07:20:35 -- setup/hugepages.sh@84 -- # : 1
00:03:19.024 07:20:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:19.024 07:20:35 -- setup/hugepages.sh@83 -- # : 0
00:03:19.024 07:20:35 -- setup/hugepages.sh@84 -- # : 0
00:03:19.024 07:20:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:19.024 07:20:35 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:19.024 07:20:35 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:19.024 07:20:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
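With the second get_test_nr_hugepages call done, get_test_nr_hugepages_per_node (next in the trace) spreads the total across the NUMA nodes, except where nodes_hp already pins a node's count. A hedged sketch of the split rule the trace exhibits (512 pages divide into 256+256 above, while pre-seeded nodes_hp entries are copied verbatim); split_hugepages_sketch is an illustration, not the shipped function:

nodes_hp=()   # no pinned per-node targets yet in the first call
split_hugepages_sketch() {
    local _nr_hugepages=$1 _no_nodes=$2 node
    local -a nodes_test=()
    if (( ${#nodes_hp[@]} > 0 )); then
        # Pre-seeded targets win: copy nodes_hp verbatim (the "(( 1 > 0 ))" branch).
        for node in "${!nodes_hp[@]}"; do
            nodes_test[node]=${nodes_hp[node]}
        done
    else
        # Otherwise split evenly, filling the highest node first, which is what
        # yields nodes_test[1]=256 and then nodes_test[0]=256 in the trace above.
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
            (( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
            (( _no_nodes-- ))
        done
    fi
    declare -p nodes_test
}
split_hugepages_sketch 512 2   # -> declare -a nodes_test=([0]="256" [1]="256")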
00:03:19.024 07:20:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:19.024 07:20:35 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:19.024 07:20:35 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.024 07:20:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:19.024 07:20:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.024 07:20:35 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.024 07:20:35 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.024 07:20:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:19.024 07:20:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:19.024 07:20:35 -- setup/hugepages.sh@78 -- # return 0
00:03:19.024 07:20:35 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:19.024 07:20:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:19.024 07:20:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:19.024 07:20:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:19.024 07:20:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:19.024 07:20:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:19.024 07:20:35 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:19.024 07:20:35 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.024 07:20:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:19.024 07:20:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.024 07:20:35 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.024 07:20:35 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.024 07:20:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:19.024 07:20:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:19.024 07:20:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:19.024 07:20:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:19.024 07:20:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:19.024 07:20:35 -- setup/hugepages.sh@78 -- # return 0
00:03:19.024 07:20:35 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:19.024 07:20:35 -- setup/hugepages.sh@187 -- # setup output
00:03:19.024 07:20:35 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.024 07:20:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:20.411 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:20.411 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:20.411 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:20.411 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:20.411 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:20.411 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:20.411 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:20.411 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:20.411 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:20.411 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:20.411 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:20.411 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:20.411 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:20.411 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:20.411 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:20.411 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:20.411 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
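scripts/setup.sh was just invoked with the HUGENODE spec assembled at hugepages.sh@181-@187. A compact restatement of that assembly, reusing the trace's variable names (a sketch of the traced logic, not the upstream script):

nodes_hp=([0]=512 [1]=1024)          # per-node targets set at @175 and @178
HUGENODE=() _nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
# custom_alloc declares a local IFS=, so the array joins into the comma list:
( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"          # 1536, matching hugepages.sh@188 below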
00:03:19.024 07:20:35 -- setup/hugepages.sh@187 -- # setup output
00:03:19.024 07:20:35 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.024 07:20:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:20.411 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:20.411 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:20.411 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:20.411 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:20.411 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:20.411 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:20.411 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:20.411 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:20.411 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:20.411 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:20.411 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:20.411 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:20.411 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:20.411 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:20.411 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:20.411 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:20.411 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:20.411 07:20:36 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:20.411 07:20:36 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:20.411 07:20:36 -- setup/hugepages.sh@89 -- # local node
00:03:20.411 07:20:36 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.411 07:20:36 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.411 07:20:36 -- setup/hugepages.sh@92 -- # local surp
00:03:20.411 07:20:36 -- setup/hugepages.sh@93 -- # local resv
00:03:20.411 07:20:36 -- setup/hugepages.sh@94 -- # local anon
00:03:20.411 07:20:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.411 07:20:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.411 07:20:36 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.411 07:20:36 -- setup/common.sh@18 -- # local node=
00:03:20.411 07:20:36 -- setup/common.sh@19 -- # local var val
00:03:20.411 07:20:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:20.411 07:20:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.411 07:20:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.411 07:20:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.411 07:20:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.411 07:20:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.411 07:20:36 -- setup/common.sh@31 -- # IFS=': '
00:03:20.411 07:20:36 -- setup/common.sh@31 -- # read -r var val _
00:03:20.411 07:20:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44761188 kB' 'MemAvailable: 48266600 kB' 'Buffers: 2704 kB' 'Cached: 10211740 kB' 'SwapCached: 0 kB' 'Active: 7235884 kB' 'Inactive: 3506552 kB' 'Active(anon): 6841532 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531192 kB' 'Mapped: 211788 kB' 'Shmem: 6313540 kB' 'KReclaimable: 196260 kB' 'Slab: 568212 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371952 kB' 'KernelStack: 12848 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 7962012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... repetitive xtrace elided: setup/common.sh@32 tests each meminfo key in turn and continues until AnonHugePages is reached ...]
00:03:20.412 07:20:36 -- setup/common.sh@31 -- # IFS=': '
00:03:20.412 07:20:36 -- setup/common.sh@31 -- # read -r var val _
00:03:20.412 07:20:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.412 07:20:36 -- setup/common.sh@33 -- # echo 0
00:03:20.412 07:20:36 -- setup/common.sh@33 -- # return 0
00:03:20.412 07:20:36 -- setup/hugepages.sh@97 -- # anon=0
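get_meminfo, whose trace repeats four times in this test, is a linear scan over meminfo lines. A minimal reconstruction from the xtrace follows (a sketch inferred from the @17-@33 markers, not the verbatim setup/common.sh; the herestring loop stands in for whatever read construct the script really uses):

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local -a mem
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                   # numeric value; trailing "kB" lands in _
            return 0
        done
        return 1
    }

Called with no node argument it reads /proc/meminfo, which is why the trace above ends with 'echo 0' for AnonHugePages; the same scan now repeats for HugePages_Surp.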
00:03:20.412 07:20:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:20.412 07:20:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.412 07:20:36 -- setup/common.sh@18 -- # local node=
00:03:20.412 07:20:36 -- setup/common.sh@19 -- # local var val
00:03:20.412 07:20:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:20.412 07:20:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.412 07:20:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.412 07:20:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.412 07:20:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.412 07:20:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.412 07:20:36 -- setup/common.sh@31 -- # IFS=': '
00:03:20.412 07:20:36 -- setup/common.sh@31 -- # read -r var val _
00:03:20.412 07:20:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44761200 kB' 'MemAvailable: 48266612 kB' 'Buffers: 2704 kB' 'Cached: 10211740 kB' 'SwapCached: 0 kB' 'Active: 7235496 kB' 'Inactive: 3506552 kB' 'Active(anon): 6841144 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530756 kB' 'Mapped: 211848 kB' 'Shmem: 6313540 kB' 'KReclaimable: 196260 kB' 'Slab: 568212 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 371952 kB' 'KernelStack: 12800 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 7962024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... repetitive xtrace elided: the same per-key scan repeats until HugePages_Surp is reached ...]
00:03:20.413 07:20:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.413 07:20:36 -- setup/common.sh@33 -- # echo 0
00:03:20.413 07:20:36 -- setup/common.sh@33 -- # return 0
00:03:20.413 07:20:36 -- setup/hugepages.sh@99 -- # surp=0
00:03:20.413 07:20:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:20.413 07:20:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:20.413 07:20:36 -- setup/common.sh@18 -- # local node=
00:03:20.413 07:20:36 -- setup/common.sh@19 -- # local var val
00:03:20.413 07:20:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:20.413 07:20:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.413 07:20:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.413 07:20:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.413 07:20:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.413 07:20:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.413 07:20:36 -- setup/common.sh@31 -- # IFS=': '
00:03:20.413 07:20:36 -- setup/common.sh@31 -- # read -r var val _
00:03:20.413 07:20:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44761792 kB' 'MemAvailable: 48267204 kB' 'Buffers: 2704 kB' 'Cached: 10211756 kB' 'SwapCached: 0 kB' 'Active: 7235080 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840728 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530384 kB' 'Mapped: 211684 kB' 'Shmem: 6313556 kB' 'KReclaimable: 196260 kB' 'Slab: 568300 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372040 kB' 'KernelStack: 12864 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 7962408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... repetitive xtrace elided: the same per-key scan repeats until HugePages_Rsvd is reached ...]
00:03:20.414 07:20:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.414 07:20:36 -- setup/common.sh@33 -- # echo 0
00:03:20.414 07:20:36 -- setup/common.sh@33 -- # return 0
00:03:20.414 07:20:36 -- setup/hugepages.sh@100 -- # resv=0
00:03:20.414 07:20:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:20.414 nr_hugepages=1536
00:03:20.414 07:20:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.414 resv_hugepages=0
00:03:20.414 07:20:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.414 surplus_hugepages=0
00:03:20.414 07:20:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.414 anon_hugepages=0
00:03:20.414 07:20:36 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:20.414 07:20:36 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
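Spelled out, the accounting the four echoes and two arithmetic checks above perform is the following (a sketch with this run's numbers; the kernel-reported HugePages_Total is fetched next in the trace to close the loop):

    nr_hugepages=1536   # 512 (node 0) + 1024 (node 1) requested via HUGENODE
    anon=0              # AnonHugePages: transparent hugepages, which would skew the count
    surp=0              # HugePages_Surp: pages the kernel allocated beyond nr_hugepages
    resv=0              # HugePages_Rsvd: pages reserved by mappings but not yet faulted in
    (( 1536 == nr_hugepages + surp + resv )) && echo "hugepage pool size consistent"

With zero surplus and zero reserved pages, the expected total is exactly the 1536 pages the test asked for.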
00:03:20.414 07:20:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.414 07:20:36 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.414 07:20:36 -- setup/common.sh@18 -- # local node=
00:03:20.414 07:20:36 -- setup/common.sh@19 -- # local var val
00:03:20.414 07:20:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:20.414 07:20:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.414 07:20:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.414 07:20:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.414 07:20:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.414 07:20:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.414 07:20:36 -- setup/common.sh@31 -- # IFS=': '
00:03:20.414 07:20:36 -- setup/common.sh@31 -- # read -r var val _
00:03:20.414 07:20:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44762452 kB' 'MemAvailable: 48267864 kB' 'Buffers: 2704 kB' 'Cached: 10211756 kB' 'SwapCached: 0 kB' 'Active: 7235212 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840860 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530540 kB' 'Mapped: 211684 kB' 'Shmem: 6313556 kB' 'KReclaimable: 196260 kB' 'Slab: 568300 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372040 kB' 'KernelStack: 12848 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 7962420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... repetitive xtrace elided: the same per-key scan repeats until HugePages_Total is reached ...]
setup/common.sh@31 -- # IFS=': ' 00:03:20.415 07:20:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.415 07:20:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.415 07:20:36 -- setup/common.sh@32 -- # continue 00:03:20.415 07:20:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.415 07:20:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.415 07:20:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.415 07:20:36 -- setup/common.sh@33 -- # echo 1536 00:03:20.415 07:20:36 -- setup/common.sh@33 -- # return 0 00:03:20.415 07:20:36 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:20.415 07:20:36 -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.415 07:20:36 -- setup/hugepages.sh@27 -- # local node 00:03:20.415 07:20:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.415 07:20:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.415 07:20:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.415 07:20:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.415 07:20:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.415 07:20:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.415 07:20:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.415 07:20:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.415 07:20:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.415 07:20:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.415 07:20:36 -- setup/common.sh@18 -- # local node=0 00:03:20.415 07:20:36 -- setup/common.sh@19 -- # local var val 00:03:20.415 07:20:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.415 07:20:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.415 07:20:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.415 07:20:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.415 07:20:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.415 07:20:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.415 07:20:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.415 07:20:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.415 07:20:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28289000 kB' 'MemUsed: 4540884 kB' 'SwapCached: 0 kB' 'Active: 2362492 kB' 'Inactive: 110044 kB' 'Active(anon): 2251604 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2243160 kB' 'Mapped: 35008 kB' 'AnonPages: 232548 kB' 'Shmem: 2022228 kB' 'KernelStack: 7192 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97124 kB' 'Slab: 312368 kB' 'SReclaimable: 97124 kB' 'SUnreclaim: 215244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.415 07:20:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.415 07:20:36 -- setup/common.sh@32 -- # continue 00:03:20.415 07:20:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.415 07:20:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.415 07:20:36 -- 
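For readers following the trace: get_meminfo walks the chosen meminfo file with IFS=': ' and echoes the value of the first key that matches, starting from /proc/meminfo and switching to the per-node file when a node number is given. A minimal standalone sketch of that parsing idea (illustrative only, not the verbatim SPDK setup/common.sh; the function name is ours):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Look up one key in /proc/meminfo or a per-node meminfo file.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        local -a mem
        local line
        # NUMA kernels expose per-node copies under sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total 0   ->  512 on this box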
[... xtrace elided: the per-key scan of the node0 meminfo repeats (MemFree, MemUsed, SwapCached, ... FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) until HugePages_Surp matches ...]
00:03:20.416 07:20:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.416 07:20:36 -- setup/common.sh@33 -- # echo 0
00:03:20.416 07:20:36 -- setup/common.sh@33 -- # return 0
00:03:20.416 07:20:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.416 07:20:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.416 07:20:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.416 07:20:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:20.416 07:20:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.416 07:20:36 -- setup/common.sh@18 -- # local node=1
00:03:20.416 07:20:36 -- setup/common.sh@19 -- # local var val
00:03:20.416 07:20:36 -- setup/common.sh@20 -- # local mem_f mem
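The two per-node surplus reads above feed counters the test already split as 512 pages on node0 and 1024 on node1; the same split can be eyeballed directly from the kernel's per-node hugepage counters. A sketch using the standard sysfs layout for 2048 kB pages (paths are the stock kernel interface, assumed rather than taken from this trace):

    # Print the per-node 2MiB hugepage allocation directly from sysfs.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        f=$node/hugepages/hugepages-2048kB/nr_hugepages
        [[ -r $f ]] || continue
        echo "node$n: $(cat "$f") pages"
    done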
00:03:20.416 07:20:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.416 07:20:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:20.416 07:20:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:20.416 07:20:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.416 07:20:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.416 07:20:36 -- setup/common.sh@31 -- # IFS=': '
00:03:20.416 07:20:36 -- setup/common.sh@31 -- # read -r var val _
00:03:20.416 07:20:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16473560 kB' 'MemUsed: 11238264 kB' 'SwapCached: 0 kB' 'Active: 4872824 kB' 'Inactive: 3396508 kB' 'Active(anon): 4589360 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7971340 kB' 'Mapped: 176676 kB' 'AnonPages: 298112 kB' 'Shmem: 4291368 kB' 'KernelStack: 5704 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99136 kB' 'Slab: 255932 kB' 'SReclaimable: 99136 kB' 'SUnreclaim: 156796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: the per-key scan of the node1 meminfo repeats until HugePages_Surp matches ...]
00:03:20.416 07:20:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.416 07:20:36 -- setup/common.sh@33 -- # echo 0
00:03:20.416 07:20:36 -- setup/common.sh@33 -- # return 0
00:03:20.416 07:20:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.416 07:20:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.416 07:20:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.416 07:20:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.416 07:20:36 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:20.416 node0=512 expecting 512
00:03:20.416 07:20:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.416 07:20:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.416 07:20:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.416 07:20:36 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:20.416 node1=1024 expecting 1024
00:03:20.416 07:20:36 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:20.416
00:03:20.416 real 0m1.466s
00:03:20.416 user 0m0.645s
00:03:20.416 sys 0m0.785s
00:03:20.416 07:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:20.416 07:20:36 -- common/autotest_common.sh@10 -- # set +x
00:03:20.416 ************************************
00:03:20.416 END TEST custom_alloc
00:03:20.416 ************************************
00:03:20.416 07:20:36 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:20.416 07:20:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:20.416 07:20:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:20.416 07:20:36 -- common/autotest_common.sh@10 -- # set +x
00:03:20.416 ************************************
00:03:20.417 START TEST no_shrink_alloc
00:03:20.417 ************************************
00:03:20.417 07:20:36 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:03:20.417 07:20:36 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:20.417 07:20:36 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:20.417 07:20:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:20.417 07:20:36 -- setup/hugepages.sh@51 -- # shift
00:03:20.417 07:20:36 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:20.417 07:20:36 -- setup/hugepages.sh@52 -- # local node_ids
00:03:20.417 07:20:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.417 07:20:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:20.417 07:20:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
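get_test_nr_hugepages above turns the requested pool size into a page count: with the 2048 kB hugepage size reported in meminfo, 2097152 / 2048 = 1024, which is exactly the nr_hugepages the trace records (this reading assumes size is expressed in kB, i.e. 2097152 kB = 2 GiB, which is what the arithmetic in the trace implies). A sketch of that conversion:

    # Sketch of the size -> page-count arithmetic behind nr_hugepages=1024.
    size=2097152                                                      # requested pool, kB (assumed unit)
    hugepagesize=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 (kB)
    echo $(( size / hugepagesize ))                                   # -> 1024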
00:03:20.417 07:20:36 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:20.417 07:20:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.417 07:20:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:20.417 07:20:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.417 07:20:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.417 07:20:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.417 07:20:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:20.417 07:20:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.417 07:20:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:20.417 07:20:36 -- setup/hugepages.sh@73 -- # return 0
00:03:20.417 07:20:36 -- setup/hugepages.sh@198 -- # setup output
00:03:20.417 07:20:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.417 07:20:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.797 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.797 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.797 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.797 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.797 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.797 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.797 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.797 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.797 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.797 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.797 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.797 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.797 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.797 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.797 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.797 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.797 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.797 07:20:37 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:21.797 07:20:37 -- setup/hugepages.sh@89 -- # local node
00:03:21.797 07:20:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.797 07:20:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.797 07:20:37 -- setup/hugepages.sh@92 -- # local surp
00:03:21.797 07:20:37 -- setup/hugepages.sh@93 -- # local resv
00:03:21.797 07:20:37 -- setup/hugepages.sh@94 -- # local anon
00:03:21.797 07:20:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:21.797 07:20:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.797 07:20:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.797 07:20:37 -- setup/common.sh@18 -- # local node=
00:03:21.797 07:20:37 -- setup/common.sh@19 -- # local var val
00:03:21.797 07:20:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.797 07:20:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.797 07:20:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.797 07:20:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.797 07:20:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.797 07:20:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.797 07:20:37 -- setup/common.sh@31 -- # IFS=': '
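verify_nr_hugepages begins by confirming transparent hugepages are not hard-disabled: the kernel marks the active THP mode with brackets, and the trace shows "always [madvise] never", i.e. madvise mode. A sketch of the same check against the standard sysfs file:

    # The active THP mode is the bracketed token, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    # Same test as the trace: only the literal "[never]" counts as a failure.
    [[ $thp != *"[never]"* ]] && echo "THP usable: $thp"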
00:03:21.797 07:20:37 -- setup/common.sh@31 -- # read -r var val _
00:03:21.797 07:20:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45797352 kB' 'MemAvailable: 49302764 kB' 'Buffers: 2704 kB' 'Cached: 10211836 kB' 'SwapCached: 0 kB' 'Active: 7235360 kB' 'Inactive: 3506552 kB' 'Active(anon): 6841008 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530612 kB' 'Mapped: 212192 kB' 'Shmem: 6313636 kB' 'KReclaimable: 196260 kB' 'Slab: 568316 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372056 kB' 'KernelStack: 12832 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7962608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... xtrace elided: the per-key scan (MemTotal through HardwareCorrupted) repeats until AnonHugePages matches ...]
00:03:21.798 07:20:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.798 07:20:37 -- setup/common.sh@33 -- # echo 0
00:03:21.798 07:20:37 -- setup/common.sh@33 -- # return 0
00:03:21.798 07:20:37 -- setup/hugepages.sh@97 -- # anon=0
00:03:21.798 07:20:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.798 07:20:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.798 07:20:37 -- setup/common.sh@18 -- # local node=
00:03:21.798 07:20:37 -- setup/common.sh@19 -- # local var val
00:03:21.798 07:20:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.798 07:20:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.798 07:20:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.798 07:20:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.798 07:20:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.798 07:20:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.798 07:20:37 -- setup/common.sh@31 -- # IFS=': '
00:03:21.798 07:20:37 -- setup/common.sh@31 -- # read -r var val _
00:03:21.798 07:20:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45798644 kB' 'MemAvailable: 49304056 kB' 'Buffers: 2704 kB' 'Cached: 10211840 kB' 'SwapCached: 0 kB' 'Active: 7235492 kB' 'Inactive: 3506552 kB' 'Active(anon): 6841140 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530788 kB' 'Mapped: 211772 kB' 'Shmem: 6313640 kB' 'KReclaimable: 196260 kB' 'Slab: 568348 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372088 kB' 'KernelStack: 12864 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7962620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... xtrace elided: the per-key scan for HugePages_Surp was still in progress when this capture ends ...]
read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.800 07:20:37 -- setup/common.sh@33 -- # echo 0 00:03:21.800 07:20:37 -- setup/common.sh@33 -- # return 0 00:03:21.800 07:20:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.800 07:20:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.800 07:20:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.800 07:20:37 -- setup/common.sh@18 -- # local node= 00:03:21.800 07:20:37 -- setup/common.sh@19 -- # local var val 00:03:21.800 07:20:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.800 07:20:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.800 07:20:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.800 07:20:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.800 07:20:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.800 07:20:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45799064 kB' 'MemAvailable: 49304476 kB' 'Buffers: 2704 kB' 'Cached: 10211844 kB' 'SwapCached: 0 kB' 'Active: 7235072 kB' 'Inactive: 3506552 kB' 'Active(anon): 6840720 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530372 kB' 'Mapped: 211692 kB' 'Shmem: 6313644 kB' 'KReclaimable: 196260 kB' 'Slab: 568324 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372064 kB' 'KernelStack: 12864 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7962636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB' 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 
-- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 
-- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.800 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.800 07:20:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # continue 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.801 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.801 07:20:37 -- setup/common.sh@33 -- # echo 0 00:03:21.801 07:20:37 -- setup/common.sh@33 -- # return 0 00:03:21.801 07:20:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.801 07:20:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.801 nr_hugepages=1024 
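The long IFS/read/[[ ]]/continue runs in this trace all come from get_meminfo in setup/common.sh, which scans the chosen meminfo file one field at a time until it reaches the requested key and echoes its value. A minimal sketch of that helper, reconstructed from the trace above (names follow the trace; the real SPDK source may differ in detail):

    # get_meminfo, as reconstructed from the setup/common.sh xtrace output.
    # Sketch only; variable names follow the trace, control flow is an assumption.
    shopt -s extglob                 # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # A node argument switches to that NUMA node's meminfo; with no
        # argument the path below ends in .../node/meminfo and never exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        # This scan is what emits one IFS=': ' / read / [[ ]] / continue
        # quartet per field in the xtrace: every field is skipped until $get matches.
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

So get_meminfo HugePages_Rsvd above returns 0 from the system-wide file, while get_meminfo HugePages_Surp 0 further down reads node0's view instead.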
00:03:21.801 07:20:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:21.801 07:20:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:21.801 07:20:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:21.801 07:20:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.801 07:20:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:21.801 07:20:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.801 07:20:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.801 07:20:37 -- setup/common.sh@18 -- # local node=
00:03:21.801 07:20:37 -- setup/common.sh@19 -- # local var val
00:03:21.801 07:20:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.801 07:20:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.801 07:20:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.801 07:20:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.801 07:20:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.801 07:20:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.801 07:20:37 -- setup/common.sh@31 -- # IFS=': '
00:03:21.801 07:20:37 -- setup/common.sh@31 -- # read -r var val _
00:03:21.801 07:20:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45799064 kB' 'MemAvailable: 49304476 kB' 'Buffers: 2704 kB' 'Cached: 10211864 kB' 'SwapCached: 0 kB' 'Active: 7235504 kB' 'Inactive: 3506552 kB' 'Active(anon): 6841152 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530720 kB' 'Mapped: 211692 kB' 'Shmem: 6313664 kB' 'KReclaimable: 196260 kB' 'Slab: 568324 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372064 kB' 'KernelStack: 12864 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7962652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
[... the field scan repeats as above against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, MemTotal through Unaccepted, each field skipped with continue ...]
00:03:21.803 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.803 07:20:37 -- setup/common.sh@33 -- # echo 1024
00:03:21.803 07:20:37 -- setup/common.sh@33 -- # return 0
00:03:21.803 07:20:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.803 07:20:37 -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.803 07:20:37 -- setup/hugepages.sh@27 -- # local node
00:03:21.803 07:20:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.803 07:20:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:21.803 07:20:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.803 07:20:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:21.803 07:20:37 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.803 07:20:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:21.803 07:20:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.803 07:20:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
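At this point hugepages.sh switches from system-wide to per-node accounting: get_nodes fills nodes_sys from sysfs (1024 pages on node0, 0 on node1 here), and the @115-@117 loop folds the reserved and per-node surplus counts into nodes_test, the expected distribution, which the node0 query just below feeds. A rough sketch of that bookkeeping under the same reconstruction caveat as above (the array seeding and the use of get_meminfo per node are assumptions, not the verbatim hugepages.sh):

    # Per-node hugepage bookkeeping, sketched from the hugepages.sh trace.
    # nodes_sys = what the kernel reports per node; nodes_test = what the test
    # expects (seeded elsewhere, e.g. nodes_test[0]=1024 for this run).
    shopt -s extglob nullglob
    declare -a nodes_sys nodes_test

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Assumption: the per-node count comes from that node's meminfo view.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}    # 2 on this machine
        (( no_nodes > 0 ))           # fail if no NUMA nodes were found
    }

    check_nodes() {
        local node
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))   # reserved pages count toward the expectation
            (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        done
    }

With surp=resv=0 and 1024 pages on node0, the check traced below prints node0=1024 expecting 1024 and passes.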
00:03:21.803 07:20:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.803 07:20:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.803 07:20:37 -- setup/common.sh@18 -- # local node=0
00:03:21.803 07:20:37 -- setup/common.sh@19 -- # local var val
00:03:21.803 07:20:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.803 07:20:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.803 07:20:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.803 07:20:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.803 07:20:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.803 07:20:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.803 07:20:37 -- setup/common.sh@31 -- # IFS=': '
00:03:21.803 07:20:37 -- setup/common.sh@31 -- # read -r var val _
00:03:21.803 07:20:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27232712 kB' 'MemUsed: 5597172 kB' 'SwapCached: 0 kB' 'Active: 2363228 kB' 'Inactive: 110044 kB' 'Active(anon): 2252340 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2243224 kB' 'Mapped: 35008 kB' 'AnonPages: 233296 kB' 'Shmem: 2022292 kB' 'KernelStack: 7192 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97124 kB' 'Slab: 312396 kB' 'SReclaimable: 97124 kB' 'SUnreclaim: 215272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the field scan repeats as above against \H\u\g\e\P\a\g\e\s\_\S\u\r\p over node0's meminfo, MemTotal through HugePages_Free, each field skipped with continue ...]
00:03:21.804 07:20:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.804 07:20:37 -- setup/common.sh@33 -- # echo 0
00:03:21.804 07:20:37 -- setup/common.sh@33 -- # return 0
00:03:21.804 07:20:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.804 07:20:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.804 07:20:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.804 07:20:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.804 07:20:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:21.804 07:20:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:21.804 07:20:37 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:21.804 07:20:37 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:21.804 07:20:37 -- setup/hugepages.sh@202 -- # setup output
00:03:21.804 07:20:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.804 07:20:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.180 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.180 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.180 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.180 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.180 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.180 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.180 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.180 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.180 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.180 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.180 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.180 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.180 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.180 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.180 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.180 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.180 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.180 INFO: Requested 512 hugepages but 1024 already allocated on node0
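The NRHUGE=512 rerun of setup.sh is a no-op for allocation: the INFO line records that node0 already holds 1024 hugepages, more than the 512 requested, so the pool is left as-is (and CLEAR_HUGE=no skips freeing it first). The guard amounts to something like the following, using the standard sysfs knob for 2 MB pages; this is an illustration, not the verbatim scripts/setup.sh:

    # Illustration only -- skip allocation when a node already has at least as
    # many 2 MB hugepages as requested. Path is the standard kernel sysfs knob.
    node=0
    want=${NRHUGE:-512}
    nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    have=$(<"$nr")
    if (( have >= want )); then
        echo "INFO: Requested $want hugepages but $have already allocated on node$node"
    else
        echo "$want" > "$nr"   # needs root; the kernel may grant fewer than requested
    fi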
# local sorted_s
00:03:23.180 07:20:39 -- setup/hugepages.sh@92 -- # local surp
00:03:23.180 07:20:39 -- setup/hugepages.sh@93 -- # local resv
00:03:23.180 07:20:39 -- setup/hugepages.sh@94 -- # local anon
00:03:23.180 07:20:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.180 07:20:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.180 07:20:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.180 07:20:39 -- setup/common.sh@18 -- # local node=
00:03:23.180 07:20:39 -- setup/common.sh@19 -- # local var val
00:03:23.180 07:20:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:23.180 07:20:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.180 07:20:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.180 07:20:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.180 07:20:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.180 07:20:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.180 07:20:39 -- setup/common.sh@31 -- # IFS=': '
00:03:23.180 07:20:39 -- setup/common.sh@31 -- # read -r var val _
00:03:23.180 07:20:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45793360 kB' 'MemAvailable: 49298772 kB' 'Buffers: 2704 kB' 'Cached: 10211912 kB' 'SwapCached: 0 kB' 'Active: 7238576 kB' 'Inactive: 3506552 kB' 'Active(anon): 6844224 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533724 kB' 'Mapped: 211776 kB' 'Shmem: 6313712 kB' 'KReclaimable: 196260 kB' 'Slab: 568324 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372064 kB' 'KernelStack: 13120 kB' 'PageTables: 9584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7966736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197076 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
00:03:23.180 07:20:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.180 07:20:39 -- setup/common.sh@32 -- # continue
00:03:23.180 07:20:39 -- setup/common.sh@31 -- # IFS=': '
00:03:23.180 07:20:39 -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@31-@32 compare-and-continue trace repeats for each remaining meminfo field until AnonHugePages matches ...]
00:03:23.181 07:20:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.181 07:20:39 -- setup/common.sh@33 -- # echo 0
00:03:23.181 07:20:39 -- setup/common.sh@33 -- # return 0
00:03:23.181 07:20:39 -- setup/hugepages.sh@97 -- # anon=0
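What the trace above is exercising: setup/common.sh's get_meminfo reads a meminfo file into an array, strips any per-node prefix, then walks the lines until the requested field matches and echoes its value. A minimal standalone sketch of that lookup, assuming bash 4+ for mapfile (the helper name is ours, not part of the SPDK tree):

# Minimal sketch of the meminfo lookup traced above (illustrative only).
get_meminfo_sketch() {
  shopt -s extglob                       # needed for the +([0-9]) pattern below
  local get=$1 node=${2:-} var val _ line
  local mem_f=/proc/meminfo mem
  # A per-node query reads that node's own sysfs meminfo instead of the global file
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix sysfs lines carry
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}
# e.g. get_meminfo_sketch HugePages_Total    -> 1024 on this host
#      get_meminfo_sketch HugePages_Surp 0   -> node0's surplus count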
00:03:23.181 07:20:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.181 07:20:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.181 07:20:39 -- setup/common.sh@18 -- # local node=
00:03:23.181 07:20:39 -- setup/common.sh@19 -- # local var val
00:03:23.181 07:20:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:23.181 07:20:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.181 07:20:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.181 07:20:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.181 07:20:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.181 07:20:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.181 07:20:39 -- setup/common.sh@31 -- # IFS=': '
00:03:23.181 07:20:39 -- setup/common.sh@31 -- # read -r var val _
00:03:23.182 07:20:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45801956 kB' 'MemAvailable: 49307368 kB' 'Buffers: 2704 kB' 'Cached: 10211912 kB' 'SwapCached: 0 kB' 'Active: 7238668 kB' 'Inactive: 3506552 kB' 'Active(anon): 6844316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533784 kB' 'Mapped: 211708 kB' 'Shmem: 6313712 kB' 'KReclaimable: 196260 kB' 'Slab: 568300 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372040 kB' 'KernelStack: 13168 kB' 'PageTables: 10008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7962696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196836 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
00:03:23.182 07:20:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.182 07:20:39 -- setup/common.sh@32 -- # continue
00:03:23.182 07:20:39 -- setup/common.sh@31 -- # IFS=': '
00:03:23.182 07:20:39 -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@31-@32 compare-and-continue trace repeats for each remaining meminfo field until HugePages_Surp matches ...]
00:03:23.444 07:20:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.444 07:20:39 -- setup/common.sh@33 -- # echo 0
00:03:23.444 07:20:39 -- setup/common.sh@33 -- # return 0
00:03:23.444 07:20:39 -- setup/hugepages.sh@99 -- # surp=0
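The hugepages.sh@96 test traced earlier pattern-matches the kernel's THP setting ('always [madvise] never' on this host) against *\[\n\e\v\e\r\]*, i.e. the literal string [never], so AnonHugePages is only folded into the accounting when transparent hugepages are not disabled. A hedged sketch of that gate (function name ours):

# Sketch of the THP gate at hugepages.sh@96: the bracketed token in the kernel's
# sysfs file marks the active setting, e.g. "always [madvise] never".
thp_not_disabled() {
  local setting
  setting=$(</sys/kernel/mm/transparent_hugepage/enabled)
  [[ $setting != *'[never]'* ]]    # succeed unless the active setting is [never]
}
thp_not_disabled && echo "THP active: AnonHugePages counts toward the anon total"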
00:03:23.444 07:20:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.444 07:20:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.444 07:20:39 -- setup/common.sh@18 -- # local node=
00:03:23.444 07:20:39 -- setup/common.sh@19 -- # local var val
00:03:23.444 07:20:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:23.444 07:20:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.444 07:20:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.444 07:20:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.444 07:20:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.444 07:20:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.444 07:20:39 -- setup/common.sh@31 -- # IFS=': '
00:03:23.444 07:20:39 -- setup/common.sh@31 -- # read -r var val _
00:03:23.444 07:20:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45804040 kB' 'MemAvailable: 49309452 kB' 'Buffers: 2704 kB' 'Cached: 10211924 kB' 'SwapCached: 0 kB' 'Active: 7236656 kB' 'Inactive: 3506552 kB' 'Active(anon): 6842304 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531784 kB' 'Mapped: 211700 kB' 'Shmem: 6313724 kB' 'KReclaimable: 196260 kB' 'Slab: 568404 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372144 kB' 'KernelStack: 12960 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7962720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
00:03:23.444 07:20:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.444 07:20:39 -- setup/common.sh@32 -- # continue
00:03:23.444 07:20:39 -- setup/common.sh@31 -- # IFS=': '
00:03:23.444 07:20:39 -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@31-@32 compare-and-continue trace repeats for each remaining meminfo field until HugePages_Rsvd matches ...]
00:03:23.445 07:20:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.445 07:20:39 -- setup/common.sh@33 -- # echo 0
00:03:23.445 07:20:39 -- setup/common.sh@33 -- # return 0
00:03:23.445 07:20:39 -- setup/hugepages.sh@100 -- # resv=0
00:03:23.445 07:20:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:23.445 nr_hugepages=1024
00:03:23.445 07:20:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:23.445 resv_hugepages=0
00:03:23.445 07:20:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:23.445 surplus_hugepages=0
00:03:23.445 07:20:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:23.445 anon_hugepages=0
00:03:23.445 07:20:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.445 07:20:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
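With anon=0, surp=0 and resv=0 collected, the checks at hugepages.sh@107/@109 above, and the @110 re-check against HugePages_Total that follows, assert that the configured 1024 pages are consistent: the kernel's HugePages_Total should equal nr_hugepages + surp + resv. A sketch of that verification, reusing the get_meminfo_sketch helper defined earlier (our illustration, not the SPDK code itself):

# Sketch of the hugepage consistency check traced at hugepages.sh@110.
nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
  echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
else
  echo "unexpected hugepage totals" >&2
fi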
00:03:23.445 07:20:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:23.445 07:20:39 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:23.445 07:20:39 -- setup/common.sh@18 -- # local node=
00:03:23.445 07:20:39 -- setup/common.sh@19 -- # local var val
00:03:23.445 07:20:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:23.445 07:20:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.445 07:20:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.445 07:20:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.445 07:20:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.445 07:20:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.445 07:20:39 -- setup/common.sh@31 -- # IFS=': '
00:03:23.445 07:20:39 -- setup/common.sh@31 -- # read -r var val _
00:03:23.445 07:20:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45811892 kB' 'MemAvailable: 49317304 kB' 'Buffers: 2704 kB' 'Cached: 10211924 kB' 'SwapCached: 0 kB' 'Active: 7235852 kB' 'Inactive: 3506552 kB' 'Active(anon): 6841500 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531020 kB' 'Mapped: 211704 kB' 'Shmem: 6313724 kB' 'KReclaimable: 196260 kB' 'Slab: 568404 kB' 'SReclaimable: 196260 kB' 'SUnreclaim: 372144 kB' 'KernelStack: 12848 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7962728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1924700 kB' 'DirectMap2M: 15820800 kB' 'DirectMap1G: 51380224 kB'
00:03:23.445 07:20:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.445 07:20:39 -- setup/common.sh@32 -- # continue
00:03:23.445 07:20:39 -- setup/common.sh@31 -- # IFS=': '
00:03:23.445 07:20:39 -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@31-@32 compare-and-continue trace repeats for each remaining meminfo field until HugePages_Total matches ...]
00:03:23.446 07:20:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.446 07:20:39 -- setup/common.sh@33 -- # echo 1024
00:03:23.446 07:20:39 -- setup/common.sh@33 -- # return 0
00:03:23.446 07:20:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
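The pass that starts below repeats the same lookup per NUMA node: get_nodes globs /sys/devices/system/node/node+([0-9]) (no_nodes=2 on this host), and get_meminfo is then called with a node argument so it reads node0's own meminfo, whose lines carry a 'Node <n> ' prefix. A rough per-node sketch (array and variable names ours):

# Sketch of the per-node walk traced below: enumerate NUMA nodes and pull each
# node's HugePages_Surp from its own meminfo; because those lines start with
# "Node <n>", the value is awk field $4.
declare -A node_surp=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
  n=${node_dir##*node}
  node_surp[$n]=$(awk -v n="$n" '$1 == "Node" && $2 == n && $3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
done
for n in "${!node_surp[@]}"; do
  echo "node$n HugePages_Surp: ${node_surp[$n]}"
done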
+ surp + resv )) 00:03:23.446 07:20:39 -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.446 07:20:39 -- setup/hugepages.sh@27 -- # local node 00:03:23.446 07:20:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.446 07:20:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.446 07:20:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.446 07:20:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:23.446 07:20:39 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.446 07:20:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.446 07:20:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.446 07:20:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.446 07:20:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.446 07:20:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.446 07:20:39 -- setup/common.sh@18 -- # local node=0 00:03:23.446 07:20:39 -- setup/common.sh@19 -- # local var val 00:03:23.446 07:20:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.446 07:20:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.446 07:20:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.446 07:20:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.446 07:20:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.446 07:20:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.446 07:20:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.446 07:20:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.447 07:20:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27255116 kB' 'MemUsed: 5574768 kB' 'SwapCached: 0 kB' 'Active: 2363696 kB' 'Inactive: 110044 kB' 'Active(anon): 2252808 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2243296 kB' 'Mapped: 35012 kB' 'AnonPages: 233664 kB' 'Shmem: 2022364 kB' 'KernelStack: 7256 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97124 kB' 'Slab: 312476 kB' 'SReclaimable: 97124 kB' 'SUnreclaim: 215352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # continue 00:03:23.447 07:20:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.447 07:20:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # continue 00:03:23.447 07:20:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.447 07:20:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # continue 00:03:23.447 07:20:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.447 07:20:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # continue 
00:03:23.447 07:20:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.447 07:20:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.447 07:20:39 -- setup/common.sh@32 -- # continue [the same @31 read / @32 compare-and-continue cycle repeats for every remaining node0 meminfo field, Inactive through HugePages_Free] 00:03:23.448 07:20:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.448 07:20:39 -- setup/common.sh@33 -- # echo 0 00:03:23.448 07:20:39 -- setup/common.sh@33 -- # return 0 00:03:23.448 07:20:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.448 07:20:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.448 07:20:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.448 07:20:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.448 07:20:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' node0=1024 expecting 1024 00:03:23.448 07:20:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:23.448 00:03:23.448 real 0m2.913s 00:03:23.448 user 0m1.191s 00:03:23.448 sys 0m1.650s 00:03:23.448 07:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.448 07:20:39 -- common/autotest_common.sh@10 -- # set +x 00:03:23.448 ************************************ 00:03:23.448 END TEST no_shrink_alloc 00:03:23.448 ************************************ 00:03:23.448 07:20:39 -- setup/hugepages.sh@217 -- # clear_hp 00:03:23.448 07:20:39 -- setup/hugepages.sh@37 -- # local node hp 00:03:23.448 07:20:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:23.448 07:20:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.448 07:20:39 -- setup/hugepages.sh@41 -- # echo 0 00:03:23.448
07:20:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.448 07:20:39 -- setup/hugepages.sh@41 -- # echo 0 00:03:23.448 07:20:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:23.448 07:20:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.448 07:20:39 -- setup/hugepages.sh@41 -- # echo 0 00:03:23.448 07:20:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:23.448 07:20:39 -- setup/hugepages.sh@41 -- # echo 0 00:03:23.448 07:20:39 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:23.448 07:20:39 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:23.448 00:03:23.448 real 0m11.561s 00:03:23.448 user 0m4.555s 00:03:23.448 sys 0m5.950s 00:03:23.448 07:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.448 07:20:39 -- common/autotest_common.sh@10 -- # set +x 00:03:23.448 ************************************ 00:03:23.448 END TEST hugepages 00:03:23.448 ************************************ 00:03:23.448 07:20:39 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:23.448 07:20:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:23.448 07:20:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.448 07:20:39 -- common/autotest_common.sh@10 -- # set +x 00:03:23.448 ************************************ 00:03:23.448 START TEST driver 00:03:23.448 ************************************ 00:03:23.448 07:20:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:23.448 * Looking for test storage... 
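
Every suite in this log is launched through autotest_common.sh's run_test, which is what produces the START/END banner blocks and the real/user/sys triple after each suite. A rough, hedged approximation of that wrapper (the real helper also toggles xtrace state, elided here):

    run_test() {
        local suite=$1; shift
        echo "************************************"
        echo "START TEST $suite"
        echo "************************************"
        time "$@"        # bash keyword; emits the real/user/sys lines when the suite returns
        local rc=$?      # exit status of the suite command itself, not of `time`
        echo "************************************"
        echo "END TEST $suite"
        echo "************************************"
        return $rc
    }

    # e.g. run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
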
00:03:23.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:23.448 07:20:39 -- setup/driver.sh@68 -- # setup reset 00:03:23.448 07:20:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.448 07:20:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.978 07:20:42 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:25.978 07:20:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.978 07:20:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.978 07:20:42 -- common/autotest_common.sh@10 -- # set +x 00:03:25.978 ************************************ 00:03:25.978 START TEST guess_driver 00:03:25.978 ************************************ 00:03:25.978 07:20:42 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:25.979 07:20:42 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:25.979 07:20:42 -- setup/driver.sh@47 -- # local fail=0 00:03:25.979 07:20:42 -- setup/driver.sh@49 -- # pick_driver 00:03:25.979 07:20:42 -- setup/driver.sh@36 -- # vfio 00:03:25.979 07:20:42 -- setup/driver.sh@21 -- # local iommu_groups 00:03:25.979 07:20:42 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:25.979 07:20:42 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:25.979 07:20:42 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:25.979 07:20:42 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:25.979 07:20:42 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:25.979 07:20:42 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:25.979 07:20:42 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:25.979 07:20:42 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:25.979 07:20:42 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:25.979 07:20:42 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:25.979 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:25.979 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:25.979 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:25.979 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:25.979 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:25.979 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:25.979 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:25.979 07:20:42 -- setup/driver.sh@30 -- # return 0 00:03:25.979 07:20:42 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:25.979 07:20:42 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:25.979 07:20:42 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:25.979 07:20:42 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 00:03:25.979 07:20:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:25.979 07:20:42 -- setup/driver.sh@45 -- # setup output config 00:03:25.979 07:20:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.979 07:20:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.353 07:20:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:27.353 07:20:43 -- setup/driver.sh@61 -- # [[ vfio-pci ==
vfio-pci ]] 00:03:27.353 07:20:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver [the @58 marker test, @61 driver comparison and @57 read repeat for each remaining device line of the setup.sh config output] 00:03:28.288 07:20:44 -- setup/driver.sh@58 -- # [[
-> == \-\> ]] 00:03:28.288 07:20:44 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:28.288 07:20:44 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.288 07:20:44 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:28.288 07:20:44 -- setup/driver.sh@65 -- # setup reset 00:03:28.288 07:20:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.288 07:20:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.841 00:03:30.841 real 0m4.834s 00:03:30.841 user 0m1.139s 00:03:30.841 sys 0m1.817s 00:03:30.841 07:20:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.841 07:20:46 -- common/autotest_common.sh@10 -- # set +x 00:03:30.841 ************************************ 00:03:30.841 END TEST guess_driver 00:03:30.841 ************************************ 00:03:30.841 00:03:30.841 real 0m7.403s 00:03:30.841 user 0m1.708s 00:03:30.841 sys 0m2.839s 00:03:30.841 07:20:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.841 07:20:46 -- common/autotest_common.sh@10 -- # set +x 00:03:30.841 ************************************ 00:03:30.841 END TEST driver 00:03:30.841 ************************************ 00:03:30.841 07:20:46 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:30.841 07:20:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.841 07:20:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.841 07:20:46 -- common/autotest_common.sh@10 -- # set +x 00:03:30.841 ************************************ 00:03:30.841 START TEST devices 00:03:30.841 ************************************ 00:03:30.841 07:20:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:30.841 * Looking for test storage... 
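
guess_driver, which just finished above, settled on vfio-pci because the host exposes populated IOMMU groups (the "(( 141 > 0 ))" record) and modprobe can resolve vfio_pci to real .ko objects. A condensed sketch of that decision path (an approximation of the traced logic; the sysfs knob paths are taken from the trace):

    pick_driver() {
        shopt -s nullglob    # so an empty iommu_groups dir yields a zero-length array
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci needs a populated IOMMU, or the unsafe no-IOMMU escape hatch
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            # --show-depends prints the insmod chain seen in the trace; .ko means resolvable
            if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }
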
00:03:30.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.841 07:20:46 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:30.841 07:20:46 -- setup/devices.sh@192 -- # setup reset 00:03:30.841 07:20:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.841 07:20:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.743 07:20:48 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:32.743 07:20:48 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:32.743 07:20:48 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:32.743 07:20:48 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:32.743 07:20:48 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:32.743 07:20:48 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:32.743 07:20:48 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:32.743 07:20:48 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:32.743 07:20:48 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:32.743 07:20:48 -- setup/devices.sh@196 -- # blocks=() 00:03:32.743 07:20:48 -- setup/devices.sh@196 -- # declare -a blocks 00:03:32.743 07:20:48 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:32.743 07:20:48 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:32.743 07:20:48 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:32.743 07:20:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:32.743 07:20:48 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:32.743 07:20:48 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:32.743 07:20:48 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:32.743 07:20:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:32.743 07:20:48 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:32.743 07:20:48 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:32.743 07:20:48 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:32.743 No valid GPT data, bailing 00:03:32.743 07:20:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:32.743 07:20:48 -- scripts/common.sh@393 -- # pt= 00:03:32.743 07:20:48 -- scripts/common.sh@394 -- # return 1 00:03:32.743 07:20:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:32.743 07:20:48 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:32.743 07:20:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:32.743 07:20:48 -- setup/common.sh@80 -- # echo 1000204886016 00:03:32.743 07:20:48 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:32.743 07:20:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.743 07:20:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:32.743 07:20:48 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:32.743 07:20:48 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:32.743 07:20:48 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:32.743 07:20:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:32.743 07:20:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:32.743 07:20:48 -- common/autotest_common.sh@10 -- # set +x 00:03:32.743 ************************************ 00:03:32.743 START TEST nvme_mount 00:03:32.743 ************************************ 00:03:32.744 07:20:48 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:03:32.744 07:20:48 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:32.744 07:20:48 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:32.744 07:20:48 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.744 07:20:48 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.744 07:20:48 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:32.744 07:20:48 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:32.744 07:20:48 -- setup/common.sh@40 -- # local part_no=1 00:03:32.744 07:20:48 -- setup/common.sh@41 -- # local size=1073741824 00:03:32.744 07:20:48 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:32.744 07:20:48 -- setup/common.sh@44 -- # parts=() 00:03:32.744 07:20:48 -- setup/common.sh@44 -- # local parts 00:03:32.744 07:20:48 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:32.744 07:20:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.744 07:20:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.744 07:20:48 -- setup/common.sh@46 -- # (( part++ )) 00:03:32.744 07:20:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.744 07:20:48 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:32.744 07:20:48 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:32.744 07:20:48 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:33.681 Creating new GPT entries in memory. 00:03:33.681 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:33.681 other utilities. 00:03:33.681 07:20:49 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:33.681 07:20:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.681 07:20:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.681 07:20:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.681 07:20:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:34.620 Creating new GPT entries in memory. 00:03:34.620 The operation has completed successfully. 
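
The two sgdisk calls above are the heart of partition_drive: wipe the existing label, then carve one 1 GiB data partition starting at sector 2048 (sectors 2048..2099199 are exactly 2097152 512-byte sectors). Reduced to plain commands, with udevadm settle standing in for the sync_dev_uevents.sh wait the harness uses:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                           # destroy GPT structures and protective MBR
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # partition 1: 1 GiB, serialized on the disk node
    udevadm settle                                     # block until /dev/nvme0n1p1 has been created
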
00:03:34.620 07:20:50 -- setup/common.sh@57 -- # (( part++ )) 00:03:34.620 07:20:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.620 07:20:50 -- setup/common.sh@62 -- # wait 3959166 00:03:34.620 07:20:50 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.620 07:20:50 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:34.620 07:20:50 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.620 07:20:50 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:34.620 07:20:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:34.620 07:20:50 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.620 07:20:50 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.620 07:20:50 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:34.620 07:20:50 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:34.620 07:20:50 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.620 07:20:50 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.620 07:20:50 -- setup/devices.sh@53 -- # local found=0 00:03:34.620 07:20:50 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.620 07:20:50 -- setup/devices.sh@56 -- # : 00:03:34.620 07:20:50 -- setup/devices.sh@59 -- # local pci status 00:03:34.620 07:20:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.620 07:20:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:34.620 07:20:50 -- setup/devices.sh@47 -- # setup output config 00:03:34.620 07:20:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.620 07:20:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:35.555 07:20:51 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.555 07:20:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:35.555 07:20:51 -- setup/devices.sh@63 -- # found=1 00:03:35.555 07:20:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.555 07:20:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.555 07:20:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.555 07:20:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.555 07:20:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.555 07:20:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.555 07:20:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.555 07:20:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.555 07:20:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.555 07:20:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.555 
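
mkfs() above formats the new partition, mounts it under the test tree and creates the marker file (the bare ':' record appears to be the no-op redirection that truncates it into existence); verify() then checks the mount before the PCI allowlist scan that continues below. The same sequence as plain commands:

    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF /dev/nvme0n1p1    # quiet + force; the partition was created moments ago
    mount /dev/nvme0n1p1 "$mnt"
    : > "$mnt/test_nvme"            # empty marker file used by the later verify/cleanup steps
    mountpoint -q "$mnt" && [[ -e $mnt/test_nvme ]] && echo verified
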
07:20:51 -- setup/devices.sh@60 -- # read -r pci _ _ status [the @62 PCI-allowlist comparison against 0000:88:00.0 and the @60 read repeat for 0000:80:04.2 through 0000:80:04.7] 00:03:35.815 07:20:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:35.815 07:20:51 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:35.815 07:20:51 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.815 07:20:51 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.815 07:20:51 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.815 07:20:51 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:35.815 07:20:51 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.815 07:20:51 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.815 07:20:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:35.815 07:20:51 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:35.815 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:35.815 07:20:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:35.815 07:20:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:36.074 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:36.074 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:36.074 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:36.074
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:36.074 07:20:52 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:36.074 07:20:52 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:36.074 07:20:52 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.074 07:20:52 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:36.074 07:20:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:36.074 07:20:52 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.074 07:20:52 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.074 07:20:52 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:36.074 07:20:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:36.074 07:20:52 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.074 07:20:52 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.074 07:20:52 -- setup/devices.sh@53 -- # local found=0 00:03:36.074 07:20:52 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.074 07:20:52 -- setup/devices.sh@56 -- # : 00:03:36.074 07:20:52 -- setup/devices.sh@59 -- # local pci status 00:03:36.074 07:20:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.074 07:20:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:36.074 07:20:52 -- setup/devices.sh@47 -- # setup output config 00:03:36.074 07:20:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.074 07:20:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.448 07:20:53 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.448 07:20:53 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:37.448 07:20:53 -- setup/devices.sh@63 -- # found=1 00:03:37.448 07:20:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.448 07:20:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.448 07:20:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.448 07:20:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.448 07:20:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.448 07:20:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.448 07:20:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.448 07:20:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.448 07:20:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.448 07:20:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.448 07:20:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.448 07:20:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.448 07:20:53 -- setup/devices.sh@60 -- # read -r pci _ _ status [the @62 PCI-allowlist comparison and @60 read repeat for 0000:00:04.3 through 0000:80:04.7] 00:03:37.448 07:20:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.448 07:20:53 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:37.448 07:20:53 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.448 07:20:53 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:37.448 07:20:53 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:37.448 07:20:53 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.448 07:20:53 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:37.448 07:20:53 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:37.448 07:20:53 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:37.448 07:20:53 -- setup/devices.sh@50 -- # local mount_point= 00:03:37.448 07:20:53 -- setup/devices.sh@51 -- # local test_file= 00:03:37.448 07:20:53 -- setup/devices.sh@53 -- # local found=0 00:03:37.448 07:20:53 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:37.448 07:20:53 -- setup/devices.sh@59 -- # local pci status 00:03:37.448 07:20:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.448 07:20:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:37.448 07:20:53 -- setup/devices.sh@47 -- # setup output config 00:03:37.448 07:20:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.448 07:20:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.824 07:20:54 --
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.824 07:20:54 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:38.824 07:20:54 -- setup/devices.sh@63 -- # found=1 00:03:38.824 07:20:54 -- setup/devices.sh@60 -- # read -r pci _ _ status [the @62 PCI-allowlist comparison and @60 read repeat for 0000:00:04.0 through 0000:80:04.7] 00:03:38.824 07:20:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.824 07:20:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:38.824 07:20:54 -- setup/devices.sh@68 -- # return 0 00:03:38.824 07:20:54 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:38.824 07:20:54 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.824 07:20:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1
]] 00:03:38.824 07:20:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.824 07:20:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:38.824 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.824 00:03:38.824 real 0m6.289s 00:03:38.824 user 0m1.440s 00:03:38.824 sys 0m2.434s 00:03:38.824 07:20:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.824 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:03:38.824 ************************************ 00:03:38.824 END TEST nvme_mount 00:03:38.824 ************************************ 00:03:38.824 07:20:54 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:38.824 07:20:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:38.824 07:20:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:38.824 07:20:54 -- common/autotest_common.sh@10 -- # set +x 00:03:38.824 ************************************ 00:03:38.824 START TEST dm_mount 00:03:38.824 ************************************ 00:03:38.824 07:20:54 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:38.824 07:20:54 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:38.824 07:20:54 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:38.824 07:20:54 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:38.824 07:20:54 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:38.824 07:20:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:38.824 07:20:54 -- setup/common.sh@40 -- # local part_no=2 00:03:38.824 07:20:54 -- setup/common.sh@41 -- # local size=1073741824 00:03:38.824 07:20:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:38.824 07:20:54 -- setup/common.sh@44 -- # parts=() 00:03:38.824 07:20:54 -- setup/common.sh@44 -- # local parts 00:03:38.824 07:20:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:38.824 07:20:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.824 07:20:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:38.824 07:20:54 -- setup/common.sh@46 -- # (( part++ )) 00:03:38.824 07:20:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.824 07:20:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:38.824 07:20:54 -- setup/common.sh@46 -- # (( part++ )) 00:03:38.824 07:20:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.824 07:20:54 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:38.824 07:20:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:38.824 07:20:54 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:39.758 Creating new GPT entries in memory. 00:03:39.758 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:39.758 other utilities. 00:03:39.758 07:20:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:39.758 07:20:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.758 07:20:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:39.758 07:20:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:39.758 07:20:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:41.130 Creating new GPT entries in memory. 00:03:41.130 The operation has completed successfully. 
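
dm_mount now stitches the two 1 GiB partitions just created into a single device-mapper target; the trace below shows dmsetup create succeeding and readlink resolving /dev/mapper/nvme_dm_test to /dev/dm-0 before the usual mkfs/mount/verify cycle runs against it. A hedged sketch of that mapping as a linear concatenation (the 2097152-sector size is assumed from the sgdisk ranges; the script itself derives the numbers at run time):

    size=2097152    # sectors per 1 GiB partition, assumed from the --new=...:2048:2099199 calls
    printf '%s\n' \
        "0 $size linear /dev/nvme0n1p1 0" \
        "$size $size linear /dev/nvme0n1p2 0" \
        | dmsetup create nvme_dm_test       # reads the mapping table from stdin
    readlink -f /dev/mapper/nvme_dm_test    # -> /dev/dm-0, as in the trace
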
00:03:41.130 07:20:56 -- setup/common.sh@57 -- # (( part++ )) 00:03:41.130 07:20:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.130 07:20:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.130 07:20:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.130 07:20:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:42.065 The operation has completed successfully. 00:03:42.066 07:20:57 -- setup/common.sh@57 -- # (( part++ )) 00:03:42.066 07:20:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.066 07:20:57 -- setup/common.sh@62 -- # wait 3961621 00:03:42.066 07:20:57 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:42.066 07:20:57 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.066 07:20:57 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.066 07:20:57 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:42.066 07:20:57 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:42.066 07:20:57 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.066 07:20:57 -- setup/devices.sh@161 -- # break 00:03:42.066 07:20:57 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.066 07:20:57 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:42.066 07:20:57 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:42.066 07:20:57 -- setup/devices.sh@166 -- # dm=dm-0 00:03:42.066 07:20:57 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:42.066 07:20:57 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:42.066 07:20:57 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.066 07:20:57 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:42.066 07:20:57 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.066 07:20:57 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:42.066 07:20:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:42.066 07:20:57 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.066 07:20:57 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.066 07:20:57 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:42.066 07:20:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:42.066 07:20:57 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.066 07:20:57 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.066 07:20:57 -- setup/devices.sh@53 -- # local found=0 00:03:42.066 07:20:57 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:42.066 07:20:57 -- setup/devices.sh@56 -- # : 00:03:42.066 07:20:57 -- 
setup/devices.sh@59 -- # local pci status 00:03:42.066 07:20:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.066 07:20:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:42.066 07:20:57 -- setup/devices.sh@47 -- # setup output config 00:03:42.066 07:20:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.066 07:20:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.999 07:20:58 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.999 07:20:58 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:42.999 07:20:58 -- setup/devices.sh@63 -- # found=1 00:03:42.999 07:20:58 -- setup/devices.sh@60 -- # read -r pci _ _ status [the @62 PCI-allowlist comparison and @60 read repeat for 0000:00:04.0 through 0000:00:04.7] 00:03:42.999 07:20:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 ==
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.999 07:20:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.257 07:20:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.257 07:20:59 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:43.257 07:20:59 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.257 07:20:59 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:43.257 07:20:59 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:43.257 07:20:59 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.257 07:20:59 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:43.257 07:20:59 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:43.257 07:20:59 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:43.257 07:20:59 -- setup/devices.sh@50 -- # local mount_point= 00:03:43.257 07:20:59 -- setup/devices.sh@51 -- # local test_file= 00:03:43.257 07:20:59 -- setup/devices.sh@53 -- # local found=0 00:03:43.257 07:20:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:43.257 07:20:59 -- setup/devices.sh@59 -- # local pci status 00:03:43.257 07:20:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.257 07:20:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:43.257 07:20:59 -- setup/devices.sh@47 -- # setup output config 00:03:43.257 07:20:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.257 07:20:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.188 07:21:00 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.188 07:21:00 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:44.188 07:21:00 -- setup/devices.sh@63 -- # found=1 00:03:44.188 07:21:00 -- setup/devices.sh@60 -- # read -r pci _ _ status [the @62 PCI-allowlist comparison and @60 read repeat for 0000:00:04.0 through 0000:80:04.7] 00:03:44.446 07:21:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.446 07:21:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:44.446 07:21:00 -- setup/devices.sh@68 -- # return 0 00:03:44.446 07:21:00 -- setup/devices.sh@187 -- # cleanup_dm 00:03:44.446 07:21:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:44.446 07:21:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:44.446 07:21:00 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:44.446 07:21:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.446 07:21:00 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:44.446 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.446 07:21:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:44.446 07:21:00 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:44.446 00:03:44.446 real 0m5.719s 00:03:44.446 user 0m1.030s 00:03:44.446 sys 0m1.544s 00:03:44.446 07:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.446 07:21:00 -- common/autotest_common.sh@10 -- # set +x 00:03:44.446 ************************************ 00:03:44.446 END TEST dm_mount 00:03:44.446 ************************************ 00:03:44.446 07:21:00 -- setup/devices.sh@1 -- # cleanup 00:03:44.446 07:21:00 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:44.446 07:21:00 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.446 07:21:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.446 07:21:00 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:44.446 07:21:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.446 07:21:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.703 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:44.703 /dev/nvme0n1: 8 bytes were erased at offset
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:44.703 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:44.703 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:44.703 07:21:00 -- setup/devices.sh@12 -- # cleanup_dm
00:03:44.703 07:21:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:44.703 07:21:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:44.703 07:21:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:44.703 07:21:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:44.703 07:21:00 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:44.703 07:21:00 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:44.703
00:03:44.703 real 0m13.945s
00:03:44.703 user 0m3.084s
00:03:44.703 sys 0m5.072s
00:03:44.703 07:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:44.703 07:21:00 -- common/autotest_common.sh@10 -- # set +x
00:03:44.703 ************************************
00:03:44.703 END TEST devices
00:03:44.703 ************************************
00:03:44.960
00:03:44.960 real 0m43.257s
00:03:44.960 user 0m12.586s
00:03:44.960 sys 0m19.019s
00:03:44.960 07:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:44.960 07:21:00 -- common/autotest_common.sh@10 -- # set +x
00:03:44.960 ************************************
00:03:44.960 END TEST setup.sh
00:03:44.960 ************************************
00:03:44.960 07:21:00 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:45.893 Hugepages
00:03:45.893 node hugesize free / total
00:03:45.893 node0 1048576kB 0 / 0
00:03:45.893 node0 2048kB 2048 / 2048
00:03:45.893 node1 1048576kB 0 / 0
00:03:45.893 node1 2048kB 0 / 0
00:03:45.893
00:03:45.893 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:45.893 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:45.893 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:45.893 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:45.893 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:45.893 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:45.893 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:45.893 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:45.893 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:45.893 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:45.893 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:45.893 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:45.893 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:45.893 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:45.893 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:45.893 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:45.893 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:45.893 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:45.893 07:21:01 -- spdk/autotest.sh@141 -- # uname -s
00:03:45.893 07:21:01 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]]
00:03:45.893 07:21:01 -- spdk/autotest.sh@143 -- # nvme_namespace_revert
00:03:45.893 07:21:01 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:47.300 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:47.300 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:47.300 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:47.300 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:47.300 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:47.300 0000:00:04.2 (8086 0e22):
ioatdma -> vfio-pci 00:03:47.300 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:47.300 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:47.300 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:47.300 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:47.300 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:47.300 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:47.300 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:47.300 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:47.300 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:47.300 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:48.235 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.235 07:21:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:49.169 07:21:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:49.169 07:21:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:49.169 07:21:05 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:49.169 07:21:05 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:49.169 07:21:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:49.169 07:21:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:49.169 07:21:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:49.169 07:21:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:49.169 07:21:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:49.169 07:21:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:49.169 07:21:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:49.169 07:21:05 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.545 Waiting for block devices as requested 00:03:50.545 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:50.545 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:50.545 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:50.803 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:50.803 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:50.803 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:50.803 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:51.062 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:51.062 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:51.062 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:51.062 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:51.320 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:51.320 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:51.320 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:51.578 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:51.578 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:51.578 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:51.578 07:21:07 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:51.578 07:21:07 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:51.578 07:21:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:51.578 07:21:07 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:51.578 07:21:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:51.578 07:21:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:51.578 07:21:07 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:51.578 07:21:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:51.578 07:21:07 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:51.578 07:21:07 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:51.578 07:21:07 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:51.578 07:21:07 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:51.578 07:21:07 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:51.578 07:21:07 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:03:51.578 07:21:07 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:51.578 07:21:07 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:51.578 07:21:07 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:51.578 07:21:07 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:51.578 07:21:07 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:51.836 07:21:07 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:51.836 07:21:07 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:51.836 07:21:07 -- common/autotest_common.sh@1542 -- # continue 00:03:51.836 07:21:07 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:51.836 07:21:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:51.836 07:21:07 -- common/autotest_common.sh@10 -- # set +x 00:03:51.836 07:21:07 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:51.836 07:21:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:51.836 07:21:07 -- common/autotest_common.sh@10 -- # set +x 00:03:51.836 07:21:07 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.771 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:53.029 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:53.029 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:53.029 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:53.029 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:53.029 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:53.029 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:53.029 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:53.029 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:53.029 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:53.029 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:53.029 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:53.029 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:53.029 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:53.029 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:53.029 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:53.963 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.963 07:21:10 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:53.963 07:21:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:53.963 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:03:53.963 07:21:10 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:53.963 07:21:10 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:53.963 07:21:10 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:53.963 07:21:10 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:53.963 07:21:10 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:53.963 07:21:10 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:53.963 07:21:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:53.963 
07:21:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:53.963 07:21:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.963 07:21:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:53.963 07:21:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:54.221 07:21:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:54.221 07:21:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:54.221 07:21:10 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:54.221 07:21:10 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:54.221 07:21:10 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:03:54.221 07:21:10 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:54.221 07:21:10 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:03:54.221 07:21:10 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:03:54.221 07:21:10 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:03:54.221 07:21:10 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3967539 00:03:54.221 07:21:10 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.221 07:21:10 -- common/autotest_common.sh@1583 -- # waitforlisten 3967539 00:03:54.221 07:21:10 -- common/autotest_common.sh@819 -- # '[' -z 3967539 ']' 00:03:54.221 07:21:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.221 07:21:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:54.221 07:21:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.221 07:21:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:54.221 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:03:54.221 [2024-07-14 07:21:10.255697] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:54.221 [2024-07-14 07:21:10.255786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3967539 ]
00:03:54.221 EAL: No free 2048 kB hugepages reported on node 1
00:03:54.221 [2024-07-14 07:21:10.316941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:54.480 [2024-07-14 07:21:10.431207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:03:54.480 [2024-07-14 07:21:10.431395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:55.048 07:21:11 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:03:55.048 07:21:11 -- common/autotest_common.sh@852 -- # return 0
00:03:55.048 07:21:11 -- common/autotest_common.sh@1585 -- # bdf_id=0
00:03:55.048 07:21:11 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}"
00:03:55.048 07:21:11 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:03:58.330 nvme0n1
00:03:58.330 07:21:14 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:58.330 [2024-07-14 07:21:14.478197] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:03:58.330 [2024-07-14 07:21:14.478257] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:03:58.330 request:
00:03:58.330 {
00:03:58.330 "nvme_ctrlr_name": "nvme0",
00:03:58.330 "password": "test",
00:03:58.330 "method": "bdev_nvme_opal_revert",
00:03:58.330 "req_id": 1
00:03:58.330 }
00:03:58.330 Got JSON-RPC error response
00:03:58.330 response:
00:03:58.330 {
00:03:58.330 "code": -32603,
00:03:58.330 "message": "Internal error"
00:03:58.330 }
00:03:58.330 07:21:14 -- common/autotest_common.sh@1589 -- # true
00:03:58.330 07:21:14 -- common/autotest_common.sh@1590 -- # (( ++bdf_id ))
00:03:58.330 07:21:14 -- common/autotest_common.sh@1593 -- # killprocess 3967539
00:03:58.330 07:21:14 -- common/autotest_common.sh@926 -- # '[' -z 3967539 ']'
00:03:58.330 07:21:14 -- common/autotest_common.sh@930 -- # kill -0 3967539
00:03:58.330 07:21:14 -- common/autotest_common.sh@931 -- # uname
00:03:58.330 07:21:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:03:58.330 07:21:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3967539
00:03:58.588 07:21:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:03:58.588 07:21:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:03:58.588 07:21:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3967539'
00:03:58.588 killing process with pid 3967539
00:03:58.588 07:21:14 -- common/autotest_common.sh@945 -- # kill 3967539
00:03:58.588 07:21:14 -- common/autotest_common.sh@950 -- # wait 3967539
00:04:00.484 07:21:16 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']'
00:04:00.484 07:21:16 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']'
00:04:00.484 07:21:16 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]]
00:04:00.484 07:21:16 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]]
00:04:00.484 07:21:16 -- spdk/autotest.sh@173 -- # timing_enter lib
00:04:00.484 07:21:16 -- common/autotest_common.sh@712 -- # xtrace_disable
00:04:00.484 07:21:16 -- common/autotest_common.sh@10 -- # set +x
00:04:00.484
07:21:16 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:00.485 07:21:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.485 07:21:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.485 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:00.485 ************************************ 00:04:00.485 START TEST env 00:04:00.485 ************************************ 00:04:00.485 07:21:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:00.485 * Looking for test storage... 00:04:00.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:00.485 07:21:16 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:00.485 07:21:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.485 07:21:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.485 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:00.485 ************************************ 00:04:00.485 START TEST env_memory 00:04:00.485 ************************************ 00:04:00.485 07:21:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:00.485 00:04:00.485 00:04:00.485 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.485 http://cunit.sourceforge.net/ 00:04:00.485 00:04:00.485 00:04:00.485 Suite: memory 00:04:00.485 Test: alloc and free memory map ...[2024-07-14 07:21:16.412127] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:00.485 passed 00:04:00.485 Test: mem map translation ...[2024-07-14 07:21:16.432034] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:00.485 [2024-07-14 07:21:16.432057] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:00.485 [2024-07-14 07:21:16.432115] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:00.485 [2024-07-14 07:21:16.432127] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:00.485 passed 00:04:00.485 Test: mem map registration ...[2024-07-14 07:21:16.473020] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:00.485 [2024-07-14 07:21:16.473039] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:00.485 passed 00:04:00.485 Test: mem map adjacent registrations ...passed 00:04:00.485 00:04:00.485 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.485 suites 1 1 n/a 0 0 00:04:00.485 tests 4 4 4 0 0 00:04:00.485 asserts 152 152 152 0 n/a 00:04:00.485 00:04:00.485 Elapsed time = 0.138 seconds 00:04:00.485 00:04:00.485 real 0m0.145s 00:04:00.485 user 0m0.140s 00:04:00.485 sys 0m0.004s 
00:04:00.485 07:21:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.485 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:00.485 ************************************ 00:04:00.485 END TEST env_memory 00:04:00.485 ************************************ 00:04:00.485 07:21:16 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:00.485 07:21:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.485 07:21:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.485 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:00.485 ************************************ 00:04:00.485 START TEST env_vtophys 00:04:00.485 ************************************ 00:04:00.485 07:21:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:00.485 EAL: lib.eal log level changed from notice to debug 00:04:00.485 EAL: Detected lcore 0 as core 0 on socket 0 00:04:00.485 EAL: Detected lcore 1 as core 1 on socket 0 00:04:00.485 EAL: Detected lcore 2 as core 2 on socket 0 00:04:00.485 EAL: Detected lcore 3 as core 3 on socket 0 00:04:00.485 EAL: Detected lcore 4 as core 4 on socket 0 00:04:00.485 EAL: Detected lcore 5 as core 5 on socket 0 00:04:00.485 EAL: Detected lcore 6 as core 8 on socket 0 00:04:00.485 EAL: Detected lcore 7 as core 9 on socket 0 00:04:00.485 EAL: Detected lcore 8 as core 10 on socket 0 00:04:00.485 EAL: Detected lcore 9 as core 11 on socket 0 00:04:00.485 EAL: Detected lcore 10 as core 12 on socket 0 00:04:00.485 EAL: Detected lcore 11 as core 13 on socket 0 00:04:00.485 EAL: Detected lcore 12 as core 0 on socket 1 00:04:00.485 EAL: Detected lcore 13 as core 1 on socket 1 00:04:00.485 EAL: Detected lcore 14 as core 2 on socket 1 00:04:00.485 EAL: Detected lcore 15 as core 3 on socket 1 00:04:00.485 EAL: Detected lcore 16 as core 4 on socket 1 00:04:00.485 EAL: Detected lcore 17 as core 5 on socket 1 00:04:00.485 EAL: Detected lcore 18 as core 8 on socket 1 00:04:00.485 EAL: Detected lcore 19 as core 9 on socket 1 00:04:00.485 EAL: Detected lcore 20 as core 10 on socket 1 00:04:00.485 EAL: Detected lcore 21 as core 11 on socket 1 00:04:00.485 EAL: Detected lcore 22 as core 12 on socket 1 00:04:00.485 EAL: Detected lcore 23 as core 13 on socket 1 00:04:00.485 EAL: Detected lcore 24 as core 0 on socket 0 00:04:00.485 EAL: Detected lcore 25 as core 1 on socket 0 00:04:00.485 EAL: Detected lcore 26 as core 2 on socket 0 00:04:00.485 EAL: Detected lcore 27 as core 3 on socket 0 00:04:00.485 EAL: Detected lcore 28 as core 4 on socket 0 00:04:00.485 EAL: Detected lcore 29 as core 5 on socket 0 00:04:00.485 EAL: Detected lcore 30 as core 8 on socket 0 00:04:00.485 EAL: Detected lcore 31 as core 9 on socket 0 00:04:00.485 EAL: Detected lcore 32 as core 10 on socket 0 00:04:00.485 EAL: Detected lcore 33 as core 11 on socket 0 00:04:00.485 EAL: Detected lcore 34 as core 12 on socket 0 00:04:00.485 EAL: Detected lcore 35 as core 13 on socket 0 00:04:00.485 EAL: Detected lcore 36 as core 0 on socket 1 00:04:00.485 EAL: Detected lcore 37 as core 1 on socket 1 00:04:00.485 EAL: Detected lcore 38 as core 2 on socket 1 00:04:00.485 EAL: Detected lcore 39 as core 3 on socket 1 00:04:00.485 EAL: Detected lcore 40 as core 4 on socket 1 00:04:00.485 EAL: Detected lcore 41 as core 5 on socket 1 00:04:00.485 EAL: Detected lcore 42 as core 8 on socket 1 00:04:00.485 EAL: Detected lcore 43 as core 9 on socket 1 00:04:00.485 EAL: Detected 
lcore 44 as core 10 on socket 1 00:04:00.485 EAL: Detected lcore 45 as core 11 on socket 1 00:04:00.485 EAL: Detected lcore 46 as core 12 on socket 1 00:04:00.485 EAL: Detected lcore 47 as core 13 on socket 1 00:04:00.485 EAL: Maximum logical cores by configuration: 128 00:04:00.485 EAL: Detected CPU lcores: 48 00:04:00.485 EAL: Detected NUMA nodes: 2 00:04:00.485 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:00.485 EAL: Detected shared linkage of DPDK 00:04:00.485 EAL: No shared files mode enabled, IPC will be disabled 00:04:00.485 EAL: Bus pci wants IOVA as 'DC' 00:04:00.485 EAL: Buses did not request a specific IOVA mode. 00:04:00.485 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:00.485 EAL: Selected IOVA mode 'VA' 00:04:00.485 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.485 EAL: Probing VFIO support... 00:04:00.485 EAL: IOMMU type 1 (Type 1) is supported 00:04:00.485 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:00.485 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:00.485 EAL: VFIO support initialized 00:04:00.485 EAL: Ask a virtual area of 0x2e000 bytes 00:04:00.485 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:00.485 EAL: Setting up physically contiguous memory... 00:04:00.485 EAL: Setting maximum number of open files to 524288 00:04:00.485 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:00.485 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:00.485 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:00.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.485 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:00.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.485 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:00.485 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:00.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.485 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:00.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.485 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:00.485 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:00.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.485 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:00.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.485 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:00.485 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:00.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.485 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:00.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.485 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:00.485 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:00.485 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:00.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.485 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:00.485 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.485 
EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:00.485 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:00.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.485 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:00.485 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.485 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:00.485 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:00.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.485 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:00.485 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.485 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:00.485 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:00.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.485 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:00.485 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.485 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:00.485 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:00.486 EAL: Hugepages will be freed exactly as allocated. 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: TSC frequency is ~2700000 KHz 00:04:00.486 EAL: Main lcore 0 is ready (tid=7f51e4c73a00;cpuset=[0]) 00:04:00.486 EAL: Trying to obtain current memory policy. 00:04:00.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.486 EAL: Restoring previous memory policy: 0 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was expanded by 2MB 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:00.486 EAL: Mem event callback 'spdk:(nil)' registered 00:04:00.486 00:04:00.486 00:04:00.486 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.486 http://cunit.sourceforge.net/ 00:04:00.486 00:04:00.486 00:04:00.486 Suite: components_suite 00:04:00.486 Test: vtophys_malloc_test ...passed 00:04:00.486 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:00.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.486 EAL: Restoring previous memory policy: 4 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was expanded by 4MB 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was shrunk by 4MB 00:04:00.486 EAL: Trying to obtain current memory policy. 
00:04:00.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.486 EAL: Restoring previous memory policy: 4 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was expanded by 6MB 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was shrunk by 6MB 00:04:00.486 EAL: Trying to obtain current memory policy. 00:04:00.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.486 EAL: Restoring previous memory policy: 4 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was expanded by 10MB 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was shrunk by 10MB 00:04:00.486 EAL: Trying to obtain current memory policy. 00:04:00.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.486 EAL: Restoring previous memory policy: 4 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was expanded by 18MB 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was shrunk by 18MB 00:04:00.486 EAL: Trying to obtain current memory policy. 00:04:00.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.486 EAL: Restoring previous memory policy: 4 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.486 EAL: request: mp_malloc_sync 00:04:00.486 EAL: No shared files mode enabled, IPC is disabled 00:04:00.486 EAL: Heap on socket 0 was expanded by 34MB 00:04:00.486 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.744 EAL: request: mp_malloc_sync 00:04:00.744 EAL: No shared files mode enabled, IPC is disabled 00:04:00.744 EAL: Heap on socket 0 was shrunk by 34MB 00:04:00.744 EAL: Trying to obtain current memory policy. 00:04:00.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.744 EAL: Restoring previous memory policy: 4 00:04:00.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.744 EAL: request: mp_malloc_sync 00:04:00.744 EAL: No shared files mode enabled, IPC is disabled 00:04:00.744 EAL: Heap on socket 0 was expanded by 66MB 00:04:00.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.744 EAL: request: mp_malloc_sync 00:04:00.744 EAL: No shared files mode enabled, IPC is disabled 00:04:00.744 EAL: Heap on socket 0 was shrunk by 66MB 00:04:00.744 EAL: Trying to obtain current memory policy. 
00:04:00.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.744 EAL: Restoring previous memory policy: 4 00:04:00.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.744 EAL: request: mp_malloc_sync 00:04:00.744 EAL: No shared files mode enabled, IPC is disabled 00:04:00.744 EAL: Heap on socket 0 was expanded by 130MB 00:04:00.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.744 EAL: request: mp_malloc_sync 00:04:00.744 EAL: No shared files mode enabled, IPC is disabled 00:04:00.744 EAL: Heap on socket 0 was shrunk by 130MB 00:04:00.744 EAL: Trying to obtain current memory policy. 00:04:00.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.744 EAL: Restoring previous memory policy: 4 00:04:00.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.744 EAL: request: mp_malloc_sync 00:04:00.744 EAL: No shared files mode enabled, IPC is disabled 00:04:00.744 EAL: Heap on socket 0 was expanded by 258MB 00:04:00.744 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.002 EAL: request: mp_malloc_sync 00:04:01.002 EAL: No shared files mode enabled, IPC is disabled 00:04:01.002 EAL: Heap on socket 0 was shrunk by 258MB 00:04:01.002 EAL: Trying to obtain current memory policy. 00:04:01.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.002 EAL: Restoring previous memory policy: 4 00:04:01.002 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.002 EAL: request: mp_malloc_sync 00:04:01.002 EAL: No shared files mode enabled, IPC is disabled 00:04:01.002 EAL: Heap on socket 0 was expanded by 514MB 00:04:01.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.260 EAL: request: mp_malloc_sync 00:04:01.260 EAL: No shared files mode enabled, IPC is disabled 00:04:01.260 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.260 EAL: Trying to obtain current memory policy. 
00:04:01.260 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.518 EAL: Restoring previous memory policy: 4
00:04:01.518 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.518 EAL: request: mp_malloc_sync
00:04:01.518 EAL: No shared files mode enabled, IPC is disabled
00:04:01.518 EAL: Heap on socket 0 was expanded by 1026MB
00:04:01.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:02.032 EAL: request: mp_malloc_sync
00:04:02.032 EAL: No shared files mode enabled, IPC is disabled
00:04:02.032 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:02.032 passed
00:04:02.032
00:04:02.032 Run Summary: Type Total Ran Passed Failed Inactive
00:04:02.032 suites 1 1 n/a 0 0
00:04:02.032 tests 2 2 2 0 0
00:04:02.032 asserts 497 497 497 0 n/a
00:04:02.032
00:04:02.032 Elapsed time = 1.368 seconds
00:04:02.032 EAL: Calling mem event callback 'spdk:(nil)'
00:04:02.032 EAL: request: mp_malloc_sync
00:04:02.032 EAL: No shared files mode enabled, IPC is disabled
00:04:02.032 EAL: Heap on socket 0 was shrunk by 2MB
00:04:02.032 EAL: No shared files mode enabled, IPC is disabled
00:04:02.032 EAL: No shared files mode enabled, IPC is disabled
00:04:02.032 EAL: No shared files mode enabled, IPC is disabled
00:04:02.032
00:04:02.032 real 0m1.482s
00:04:02.032 user 0m0.863s
00:04:02.032 sys 0m0.586s
00:04:02.032 07:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:02.032 07:21:18 -- common/autotest_common.sh@10 -- # set +x
00:04:02.032 ************************************
00:04:02.032 END TEST env_vtophys
00:04:02.032 ************************************
00:04:02.032 07:21:18 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:02.032 07:21:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:02.032 07:21:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:02.032 07:21:18 -- common/autotest_common.sh@10 -- # set +x
00:04:02.032 ************************************
00:04:02.032 START TEST env_pci
00:04:02.032 ************************************
00:04:02.032 07:21:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:02.032
00:04:02.032
00:04:02.032 CUnit - A unit testing framework for C - Version 2.1-3
00:04:02.032 http://cunit.sourceforge.net/
00:04:02.032
00:04:02.032
00:04:02.032 Suite: pci
00:04:02.032 Test: pci_hook ...[2024-07-14 07:21:18.066700] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3968571 has claimed it
00:04:02.032 EAL: Cannot find device (10000:00:01.0)
00:04:02.032 EAL: Failed to attach device on primary process
00:04:02.032 passed
00:04:02.032
00:04:02.032 Run Summary: Type Total Ran Passed Failed Inactive
00:04:02.032 suites 1 1 n/a 0 0
00:04:02.032 tests 1 1 1 0 0
00:04:02.032 asserts 25 25 25 0 n/a
00:04:02.032
00:04:02.032 Elapsed time = 0.021 seconds
00:04:02.032
00:04:02.032 real 0m0.033s
00:04:02.032 user 0m0.014s
00:04:02.032 sys 0m0.020s
00:04:02.032 07:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:02.032 07:21:18 -- common/autotest_common.sh@10 -- # set +x
00:04:02.032 ************************************
00:04:02.032 END TEST env_pci
00:04:02.032 ************************************
00:04:02.032 07:21:18 -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:02.032 07:21:18 -- env/env.sh@15 -- # uname
00:04:02.032 07:21:18 -- env/env.sh@15 -- # '[' Linux =
Linux ']' 00:04:02.032 07:21:18 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:02.032 07:21:18 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:02.032 07:21:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:02.032 07:21:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:02.032 07:21:18 -- common/autotest_common.sh@10 -- # set +x 00:04:02.032 ************************************ 00:04:02.032 START TEST env_dpdk_post_init 00:04:02.032 ************************************ 00:04:02.032 07:21:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:02.032 EAL: Detected CPU lcores: 48 00:04:02.032 EAL: Detected NUMA nodes: 2 00:04:02.032 EAL: Detected shared linkage of DPDK 00:04:02.033 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.033 EAL: Selected IOVA mode 'VA' 00:04:02.033 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.033 EAL: VFIO support initialized 00:04:02.033 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.316 EAL: Using IOMMU type 1 (Type 1) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:02.316 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:03.250 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:06.528 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:06.528 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:06.528 Starting DPDK initialization... 00:04:06.528 Starting SPDK post initialization... 00:04:06.528 SPDK NVMe probe 00:04:06.528 Attaching to 0000:88:00.0 00:04:06.528 Attached to 0000:88:00.0 00:04:06.528 Cleaning up... 
00:04:06.528 00:04:06.528 real 0m4.385s 00:04:06.528 user 0m3.255s 00:04:06.528 sys 0m0.186s 00:04:06.528 07:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.528 07:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.528 ************************************ 00:04:06.528 END TEST env_dpdk_post_init 00:04:06.528 ************************************ 00:04:06.528 07:21:22 -- env/env.sh@26 -- # uname 00:04:06.528 07:21:22 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.528 07:21:22 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.528 07:21:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.528 07:21:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.528 07:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.528 ************************************ 00:04:06.528 START TEST env_mem_callbacks 00:04:06.528 ************************************ 00:04:06.528 07:21:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.528 EAL: Detected CPU lcores: 48 00:04:06.528 EAL: Detected NUMA nodes: 2 00:04:06.528 EAL: Detected shared linkage of DPDK 00:04:06.528 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.528 EAL: Selected IOVA mode 'VA' 00:04:06.528 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.528 EAL: VFIO support initialized 00:04:06.528 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.528 00:04:06.528 00:04:06.528 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.528 http://cunit.sourceforge.net/ 00:04:06.528 00:04:06.528 00:04:06.528 Suite: memory 00:04:06.528 Test: test ... 
00:04:06.528 register 0x200000200000 2097152 00:04:06.528 malloc 3145728 00:04:06.528 register 0x200000400000 4194304 00:04:06.528 buf 0x200000500000 len 3145728 PASSED 00:04:06.528 malloc 64 00:04:06.528 buf 0x2000004fff40 len 64 PASSED 00:04:06.528 malloc 4194304 00:04:06.528 register 0x200000800000 6291456 00:04:06.528 buf 0x200000a00000 len 4194304 PASSED 00:04:06.528 free 0x200000500000 3145728 00:04:06.528 free 0x2000004fff40 64 00:04:06.528 unregister 0x200000400000 4194304 PASSED 00:04:06.528 free 0x200000a00000 4194304 00:04:06.528 unregister 0x200000800000 6291456 PASSED 00:04:06.528 malloc 8388608 00:04:06.528 register 0x200000400000 10485760 00:04:06.528 buf 0x200000600000 len 8388608 PASSED 00:04:06.528 free 0x200000600000 8388608 00:04:06.528 unregister 0x200000400000 10485760 PASSED 00:04:06.528 passed 00:04:06.528 00:04:06.528 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.528 suites 1 1 n/a 0 0 00:04:06.528 tests 1 1 1 0 0 00:04:06.528 asserts 15 15 15 0 n/a 00:04:06.528 00:04:06.528 Elapsed time = 0.005 seconds 00:04:06.528 00:04:06.528 real 0m0.048s 00:04:06.528 user 0m0.018s 00:04:06.528 sys 0m0.029s 00:04:06.528 07:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.528 07:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.528 ************************************ 00:04:06.528 END TEST env_mem_callbacks 00:04:06.528 ************************************ 00:04:06.528 00:04:06.528 real 0m6.265s 00:04:06.528 user 0m4.362s 00:04:06.528 sys 0m0.950s 00:04:06.528 07:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.528 07:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.528 ************************************ 00:04:06.528 END TEST env 00:04:06.528 ************************************ 00:04:06.528 07:21:22 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.528 07:21:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.528 07:21:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.528 07:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.529 ************************************ 00:04:06.529 START TEST rpc 00:04:06.529 ************************************ 00:04:06.529 07:21:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.529 * Looking for test storage... 00:04:06.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.529 07:21:22 -- rpc/rpc.sh@65 -- # spdk_pid=3969232 00:04:06.529 07:21:22 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:06.529 07:21:22 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.529 07:21:22 -- rpc/rpc.sh@67 -- # waitforlisten 3969232 00:04:06.529 07:21:22 -- common/autotest_common.sh@819 -- # '[' -z 3969232 ']' 00:04:06.529 07:21:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.529 07:21:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:06.529 07:21:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:06.529 07:21:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:06.529 07:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:06.787 [2024-07-14 07:21:22.710481] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:06.787 [2024-07-14 07:21:22.710560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3969232 ] 00:04:06.787 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.787 [2024-07-14 07:21:22.766913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.787 [2024-07-14 07:21:22.872653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:06.787 [2024-07-14 07:21:22.872835] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.787 [2024-07-14 07:21:22.872856] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3969232' to capture a snapshot of events at runtime. 00:04:06.787 [2024-07-14 07:21:22.872906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3969232 for offline analysis/debug. 00:04:06.787 [2024-07-14 07:21:22.872959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.719 07:21:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:07.719 07:21:23 -- common/autotest_common.sh@852 -- # return 0 00:04:07.719 07:21:23 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.719 07:21:23 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.719 07:21:23 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:07.719 07:21:23 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:07.719 07:21:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.719 07:21:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.719 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.719 ************************************ 00:04:07.719 START TEST rpc_integrity 00:04:07.719 ************************************ 00:04:07.719 07:21:23 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:07.719 07:21:23 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.719 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:07.719 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.719 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:07.719 07:21:23 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.719 07:21:23 -- rpc/rpc.sh@13 -- # jq length 00:04:07.719 07:21:23 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.719 07:21:23 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.719 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:07.719 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.719 07:21:23 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]]
00:04:07.719 07:21:23 -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:07.719 07:21:23 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:07.719 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:07.719 07:21:23 -- common/autotest_common.sh@10 -- # set +x
00:04:07.719 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:07.719 07:21:23 -- rpc/rpc.sh@16 -- # bdevs='[
00:04:07.719 {
00:04:07.719 "name": "Malloc0",
00:04:07.719 "aliases": [
00:04:07.719 "ccf44136-f3ad-4504-9fb2-fab3a4ef95fc"
00:04:07.719 ],
00:04:07.719 "product_name": "Malloc disk",
00:04:07.719 "block_size": 512,
00:04:07.719 "num_blocks": 16384,
00:04:07.719 "uuid": "ccf44136-f3ad-4504-9fb2-fab3a4ef95fc",
00:04:07.719 "assigned_rate_limits": {
00:04:07.719 "rw_ios_per_sec": 0,
00:04:07.719 "rw_mbytes_per_sec": 0,
00:04:07.719 "r_mbytes_per_sec": 0,
00:04:07.719 "w_mbytes_per_sec": 0
00:04:07.719 },
00:04:07.719 "claimed": false,
00:04:07.719 "zoned": false,
00:04:07.719 "supported_io_types": {
00:04:07.719 "read": true,
00:04:07.719 "write": true,
00:04:07.719 "unmap": true,
00:04:07.719 "write_zeroes": true,
00:04:07.719 "flush": true,
00:04:07.719 "reset": true,
00:04:07.719 "compare": false,
00:04:07.719 "compare_and_write": false,
00:04:07.719 "abort": true,
00:04:07.719 "nvme_admin": false,
00:04:07.719 "nvme_io": false
00:04:07.719 },
00:04:07.719 "memory_domains": [
00:04:07.719 {
00:04:07.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:07.719 "dma_device_type": 2
00:04:07.719 }
00:04:07.719 ],
00:04:07.719 "driver_specific": {}
00:04:07.719 }
00:04:07.719 ]'
00:04:07.719 07:21:23 -- rpc/rpc.sh@17 -- # jq length
00:04:07.719 07:21:23 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:07.719 07:21:23 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:07.719 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:07.719 07:21:23 -- common/autotest_common.sh@10 -- # set +x
00:04:07.719 [2024-07-14 07:21:23.761371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:07.719 [2024-07-14 07:21:23.761423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:07.719 [2024-07-14 07:21:23.761448] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xefef70
00:04:07.719 [2024-07-14 07:21:23.761464] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:07.719 [2024-07-14 07:21:23.762989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:07.719 [2024-07-14 07:21:23.763013] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:07.719 Passthru0
00:04:07.719 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:07.719 07:21:23 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:07.719 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:07.719 07:21:23 -- common/autotest_common.sh@10 -- # set +x
00:04:07.719 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:07.719 07:21:23 -- rpc/rpc.sh@20 -- # bdevs='[
00:04:07.719 {
00:04:07.719 "name": "Malloc0",
00:04:07.719 "aliases": [
00:04:07.719 "ccf44136-f3ad-4504-9fb2-fab3a4ef95fc"
00:04:07.719 ],
00:04:07.719 "product_name": "Malloc disk",
00:04:07.719 "block_size": 512,
00:04:07.719 "num_blocks": 16384,
00:04:07.719 "uuid": "ccf44136-f3ad-4504-9fb2-fab3a4ef95fc",
00:04:07.719 "assigned_rate_limits": {
00:04:07.719 "rw_ios_per_sec": 0,
00:04:07.719 "rw_mbytes_per_sec": 0,
00:04:07.719 "r_mbytes_per_sec": 0,
00:04:07.719 "w_mbytes_per_sec": 0
00:04:07.719 },
00:04:07.719 "claimed": true,
00:04:07.719 "claim_type": "exclusive_write",
00:04:07.719 "zoned": false,
00:04:07.719 "supported_io_types": {
00:04:07.719 "read": true,
00:04:07.719 "write": true,
00:04:07.719 "unmap": true,
00:04:07.719 "write_zeroes": true,
00:04:07.719 "flush": true,
00:04:07.719 "reset": true,
00:04:07.719 "compare": false,
00:04:07.719 "compare_and_write": false,
00:04:07.719 "abort": true,
00:04:07.719 "nvme_admin": false,
00:04:07.719 "nvme_io": false
00:04:07.719 },
00:04:07.719 "memory_domains": [
00:04:07.719 {
00:04:07.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:07.719 "dma_device_type": 2
00:04:07.719 }
00:04:07.719 ],
00:04:07.719 "driver_specific": {}
00:04:07.719 },
00:04:07.719 {
00:04:07.719 "name": "Passthru0",
00:04:07.719 "aliases": [
00:04:07.719 "989c005f-ae40-5f9c-90ae-2ee24fb7fbfd"
00:04:07.719 ],
00:04:07.719 "product_name": "passthru",
00:04:07.719 "block_size": 512,
00:04:07.719 "num_blocks": 16384,
00:04:07.719 "uuid": "989c005f-ae40-5f9c-90ae-2ee24fb7fbfd",
00:04:07.719 "assigned_rate_limits": {
00:04:07.719 "rw_ios_per_sec": 0,
00:04:07.719 "rw_mbytes_per_sec": 0,
00:04:07.719 "r_mbytes_per_sec": 0,
00:04:07.719 "w_mbytes_per_sec": 0
00:04:07.719 },
00:04:07.719 "claimed": false,
00:04:07.719 "zoned": false,
00:04:07.719 "supported_io_types": {
00:04:07.719 "read": true,
00:04:07.719 "write": true,
00:04:07.719 "unmap": true,
00:04:07.719 "write_zeroes": true,
00:04:07.719 "flush": true,
00:04:07.719 "reset": true,
00:04:07.719 "compare": false,
00:04:07.719 "compare_and_write": false,
00:04:07.720 "abort": true,
00:04:07.720 "nvme_admin": false,
00:04:07.720 "nvme_io": false
00:04:07.720 },
00:04:07.720 "memory_domains": [
00:04:07.720 {
00:04:07.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:07.720 "dma_device_type": 2
00:04:07.720 }
00:04:07.720 ],
00:04:07.720 "driver_specific": {
00:04:07.720 "passthru": {
00:04:07.720 "name": "Passthru0",
00:04:07.720 "base_bdev_name": "Malloc0"
00:04:07.720 }
00:04:07.720 }
00:04:07.720 }
00:04:07.720 ]'
00:04:07.720 07:21:23 -- rpc/rpc.sh@21 -- # jq length
00:04:07.720 07:21:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:07.720 07:21:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:07.720 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:07.720 07:21:23 -- common/autotest_common.sh@10 -- # set +x
00:04:07.720 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:07.720 07:21:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:07.720 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:07.720 07:21:23 -- common/autotest_common.sh@10 -- # set +x
00:04:07.720 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:07.720 07:21:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:07.720 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:07.720 07:21:23 -- common/autotest_common.sh@10 -- # set +x
00:04:07.720 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:07.720 07:21:23 -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:07.720 07:21:23 -- rpc/rpc.sh@26 -- # jq length
00:04:07.720 07:21:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:07.720
00:04:07.720 real 0m0.222s
00:04:07.720 user 0m0.148s
00:04:07.720 sys 0m0.017s
00:04:07.720 07:21:23 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:07.720 07:21:23 -- common/autotest_common.sh@10 -- # set +x
00:04:07.720 ************************************
00:04:07.720 END TEST rpc_integrity 00:04:07.720 ************************************ 00:04:07.977 07:21:23 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.977 07:21:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.977 07:21:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.977 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.977 ************************************ 00:04:07.977 START TEST rpc_plugins 00:04:07.977 ************************************ 00:04:07.977 07:21:23 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:07.977 07:21:23 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.977 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:07.977 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.977 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:07.977 07:21:23 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.977 07:21:23 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.977 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:07.977 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.977 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:07.977 07:21:23 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.977 { 00:04:07.977 "name": "Malloc1", 00:04:07.977 "aliases": [ 00:04:07.977 "896574ea-f6a1-44ea-bd03-be9f165db498" 00:04:07.977 ], 00:04:07.977 "product_name": "Malloc disk", 00:04:07.977 "block_size": 4096, 00:04:07.977 "num_blocks": 256, 00:04:07.977 "uuid": "896574ea-f6a1-44ea-bd03-be9f165db498", 00:04:07.977 "assigned_rate_limits": { 00:04:07.977 "rw_ios_per_sec": 0, 00:04:07.977 "rw_mbytes_per_sec": 0, 00:04:07.977 "r_mbytes_per_sec": 0, 00:04:07.977 "w_mbytes_per_sec": 0 00:04:07.977 }, 00:04:07.977 "claimed": false, 00:04:07.977 "zoned": false, 00:04:07.977 "supported_io_types": { 00:04:07.977 "read": true, 00:04:07.977 "write": true, 00:04:07.977 "unmap": true, 00:04:07.977 "write_zeroes": true, 00:04:07.977 "flush": true, 00:04:07.977 "reset": true, 00:04:07.977 "compare": false, 00:04:07.977 "compare_and_write": false, 00:04:07.977 "abort": true, 00:04:07.977 "nvme_admin": false, 00:04:07.977 "nvme_io": false 00:04:07.977 }, 00:04:07.977 "memory_domains": [ 00:04:07.977 { 00:04:07.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.977 "dma_device_type": 2 00:04:07.977 } 00:04:07.977 ], 00:04:07.977 "driver_specific": {} 00:04:07.977 } 00:04:07.977 ]' 00:04:07.977 07:21:23 -- rpc/rpc.sh@32 -- # jq length 00:04:07.977 07:21:23 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.977 07:21:23 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.977 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:07.977 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.977 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:07.977 07:21:23 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.978 07:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:07.978 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.978 07:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:07.978 07:21:23 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.978 07:21:23 -- rpc/rpc.sh@36 -- # jq length 00:04:07.978 07:21:24 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.978 00:04:07.978 real 0m0.116s 00:04:07.978 user 0m0.076s 00:04:07.978 sys 0m0.009s 00:04:07.978 07:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.978 07:21:24 -- 
common/autotest_common.sh@10 -- # set +x 00:04:07.978 ************************************ 00:04:07.978 END TEST rpc_plugins 00:04:07.978 ************************************ 00:04:07.978 07:21:24 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.978 07:21:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.978 07:21:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.978 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:07.978 ************************************ 00:04:07.978 START TEST rpc_trace_cmd_test 00:04:07.978 ************************************ 00:04:07.978 07:21:24 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:07.978 07:21:24 -- rpc/rpc.sh@40 -- # local info 00:04:07.978 07:21:24 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.978 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:07.978 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:07.978 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:07.978 07:21:24 -- rpc/rpc.sh@42 -- # info='{ 00:04:07.978 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3969232", 00:04:07.978 "tpoint_group_mask": "0x8", 00:04:07.978 "iscsi_conn": { 00:04:07.978 "mask": "0x2", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "scsi": { 00:04:07.978 "mask": "0x4", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "bdev": { 00:04:07.978 "mask": "0x8", 00:04:07.978 "tpoint_mask": "0xffffffffffffffff" 00:04:07.978 }, 00:04:07.978 "nvmf_rdma": { 00:04:07.978 "mask": "0x10", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "nvmf_tcp": { 00:04:07.978 "mask": "0x20", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "ftl": { 00:04:07.978 "mask": "0x40", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "blobfs": { 00:04:07.978 "mask": "0x80", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "dsa": { 00:04:07.978 "mask": "0x200", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "thread": { 00:04:07.978 "mask": "0x400", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "nvme_pcie": { 00:04:07.978 "mask": "0x800", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "iaa": { 00:04:07.978 "mask": "0x1000", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "nvme_tcp": { 00:04:07.978 "mask": "0x2000", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 }, 00:04:07.978 "bdev_nvme": { 00:04:07.978 "mask": "0x4000", 00:04:07.978 "tpoint_mask": "0x0" 00:04:07.978 } 00:04:07.978 }' 00:04:07.978 07:21:24 -- rpc/rpc.sh@43 -- # jq length 00:04:07.978 07:21:24 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:07.978 07:21:24 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:07.978 07:21:24 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:07.978 07:21:24 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.235 07:21:24 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.235 07:21:24 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.235 07:21:24 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.235 07:21:24 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:08.235 07:21:24 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:08.235 00:04:08.235 real 0m0.191s 00:04:08.235 user 0m0.168s 00:04:08.235 sys 0m0.016s 00:04:08.235 07:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.235 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.235 ************************************ 
00:04:08.235 END TEST rpc_trace_cmd_test 00:04:08.235 ************************************ 00:04:08.235 07:21:24 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:08.236 07:21:24 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:08.236 07:21:24 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:08.236 07:21:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.236 07:21:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.236 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.236 ************************************ 00:04:08.236 START TEST rpc_daemon_integrity 00:04:08.236 ************************************ 00:04:08.236 07:21:24 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:08.236 07:21:24 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.236 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:08.236 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.236 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:08.236 07:21:24 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.236 07:21:24 -- rpc/rpc.sh@13 -- # jq length 00:04:08.236 07:21:24 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.236 07:21:24 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.236 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:08.236 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.236 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:08.236 07:21:24 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:08.236 07:21:24 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.236 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:08.236 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.236 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:08.236 07:21:24 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.236 { 00:04:08.236 "name": "Malloc2", 00:04:08.236 "aliases": [ 00:04:08.236 "f5d27bd1-7e9d-4047-943b-eddfaf5cc17c" 00:04:08.236 ], 00:04:08.236 "product_name": "Malloc disk", 00:04:08.236 "block_size": 512, 00:04:08.236 "num_blocks": 16384, 00:04:08.236 "uuid": "f5d27bd1-7e9d-4047-943b-eddfaf5cc17c", 00:04:08.236 "assigned_rate_limits": { 00:04:08.236 "rw_ios_per_sec": 0, 00:04:08.236 "rw_mbytes_per_sec": 0, 00:04:08.236 "r_mbytes_per_sec": 0, 00:04:08.236 "w_mbytes_per_sec": 0 00:04:08.236 }, 00:04:08.236 "claimed": false, 00:04:08.236 "zoned": false, 00:04:08.236 "supported_io_types": { 00:04:08.236 "read": true, 00:04:08.236 "write": true, 00:04:08.236 "unmap": true, 00:04:08.236 "write_zeroes": true, 00:04:08.236 "flush": true, 00:04:08.236 "reset": true, 00:04:08.236 "compare": false, 00:04:08.236 "compare_and_write": false, 00:04:08.236 "abort": true, 00:04:08.236 "nvme_admin": false, 00:04:08.236 "nvme_io": false 00:04:08.236 }, 00:04:08.236 "memory_domains": [ 00:04:08.236 { 00:04:08.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.236 "dma_device_type": 2 00:04:08.236 } 00:04:08.236 ], 00:04:08.236 "driver_specific": {} 00:04:08.236 } 00:04:08.236 ]' 00:04:08.236 07:21:24 -- rpc/rpc.sh@17 -- # jq length 00:04:08.236 07:21:24 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.236 07:21:24 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:08.236 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:08.236 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.236 [2024-07-14 07:21:24.363110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:08.236 [2024-07-14 
07:21:24.363173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.236 [2024-07-14 07:21:24.363199] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x109e970 00:04:08.236 [2024-07-14 07:21:24.363215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.236 [2024-07-14 07:21:24.364557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.236 [2024-07-14 07:21:24.364585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.236 Passthru0 00:04:08.236 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:08.236 07:21:24 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.236 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:08.236 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.236 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:08.236 07:21:24 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.236 { 00:04:08.236 "name": "Malloc2", 00:04:08.236 "aliases": [ 00:04:08.236 "f5d27bd1-7e9d-4047-943b-eddfaf5cc17c" 00:04:08.236 ], 00:04:08.236 "product_name": "Malloc disk", 00:04:08.236 "block_size": 512, 00:04:08.236 "num_blocks": 16384, 00:04:08.236 "uuid": "f5d27bd1-7e9d-4047-943b-eddfaf5cc17c", 00:04:08.236 "assigned_rate_limits": { 00:04:08.236 "rw_ios_per_sec": 0, 00:04:08.236 "rw_mbytes_per_sec": 0, 00:04:08.236 "r_mbytes_per_sec": 0, 00:04:08.236 "w_mbytes_per_sec": 0 00:04:08.236 }, 00:04:08.236 "claimed": true, 00:04:08.236 "claim_type": "exclusive_write", 00:04:08.236 "zoned": false, 00:04:08.236 "supported_io_types": { 00:04:08.236 "read": true, 00:04:08.236 "write": true, 00:04:08.236 "unmap": true, 00:04:08.236 "write_zeroes": true, 00:04:08.236 "flush": true, 00:04:08.236 "reset": true, 00:04:08.236 "compare": false, 00:04:08.236 "compare_and_write": false, 00:04:08.236 "abort": true, 00:04:08.236 "nvme_admin": false, 00:04:08.236 "nvme_io": false 00:04:08.236 }, 00:04:08.236 "memory_domains": [ 00:04:08.236 { 00:04:08.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.236 "dma_device_type": 2 00:04:08.236 } 00:04:08.236 ], 00:04:08.236 "driver_specific": {} 00:04:08.236 }, 00:04:08.236 { 00:04:08.236 "name": "Passthru0", 00:04:08.236 "aliases": [ 00:04:08.236 "e5b9fcab-7fc1-51ef-9d5e-f3c882cbe8c5" 00:04:08.236 ], 00:04:08.236 "product_name": "passthru", 00:04:08.236 "block_size": 512, 00:04:08.236 "num_blocks": 16384, 00:04:08.236 "uuid": "e5b9fcab-7fc1-51ef-9d5e-f3c882cbe8c5", 00:04:08.236 "assigned_rate_limits": { 00:04:08.236 "rw_ios_per_sec": 0, 00:04:08.236 "rw_mbytes_per_sec": 0, 00:04:08.236 "r_mbytes_per_sec": 0, 00:04:08.236 "w_mbytes_per_sec": 0 00:04:08.236 }, 00:04:08.236 "claimed": false, 00:04:08.236 "zoned": false, 00:04:08.236 "supported_io_types": { 00:04:08.236 "read": true, 00:04:08.236 "write": true, 00:04:08.236 "unmap": true, 00:04:08.236 "write_zeroes": true, 00:04:08.236 "flush": true, 00:04:08.236 "reset": true, 00:04:08.236 "compare": false, 00:04:08.236 "compare_and_write": false, 00:04:08.236 "abort": true, 00:04:08.236 "nvme_admin": false, 00:04:08.236 "nvme_io": false 00:04:08.236 }, 00:04:08.236 "memory_domains": [ 00:04:08.236 { 00:04:08.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.236 "dma_device_type": 2 00:04:08.236 } 00:04:08.236 ], 00:04:08.236 "driver_specific": { 00:04:08.236 "passthru": { 00:04:08.236 "name": "Passthru0", 00:04:08.236 "base_bdev_name": "Malloc2" 00:04:08.236 } 00:04:08.236 } 00:04:08.236 } 
00:04:08.236 ]' 00:04:08.236 07:21:24 -- rpc/rpc.sh@21 -- # jq length 00:04:08.495 07:21:24 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.495 07:21:24 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.495 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:08.495 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.495 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:08.495 07:21:24 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:08.495 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:08.495 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.495 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:08.495 07:21:24 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.495 07:21:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:08.495 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.495 07:21:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:08.495 07:21:24 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.495 07:21:24 -- rpc/rpc.sh@26 -- # jq length 00:04:08.495 07:21:24 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.495 00:04:08.495 real 0m0.223s 00:04:08.495 user 0m0.151s 00:04:08.495 sys 0m0.017s 00:04:08.495 07:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.495 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.495 ************************************ 00:04:08.495 END TEST rpc_daemon_integrity 00:04:08.495 ************************************ 00:04:08.495 07:21:24 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:08.495 07:21:24 -- rpc/rpc.sh@84 -- # killprocess 3969232 00:04:08.495 07:21:24 -- common/autotest_common.sh@926 -- # '[' -z 3969232 ']' 00:04:08.495 07:21:24 -- common/autotest_common.sh@930 -- # kill -0 3969232 00:04:08.495 07:21:24 -- common/autotest_common.sh@931 -- # uname 00:04:08.495 07:21:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:08.495 07:21:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3969232 00:04:08.495 07:21:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:08.495 07:21:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:08.495 07:21:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3969232' 00:04:08.495 killing process with pid 3969232 00:04:08.495 07:21:24 -- common/autotest_common.sh@945 -- # kill 3969232 00:04:08.495 07:21:24 -- common/autotest_common.sh@950 -- # wait 3969232 00:04:09.062 00:04:09.062 real 0m2.373s 00:04:09.062 user 0m3.005s 00:04:09.062 sys 0m0.568s 00:04:09.062 07:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.062 07:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:09.062 ************************************ 00:04:09.062 END TEST rpc 00:04:09.062 ************************************ 00:04:09.062 07:21:25 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:09.062 07:21:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.062 07:21:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.062 07:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.062 ************************************ 00:04:09.062 START TEST rpc_client 00:04:09.062 ************************************ 00:04:09.062 07:21:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
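The rpc_integrity and rpc_daemon_integrity suites above drive an ordinary bdev claim/release cycle through scripts/rpc.py. A minimal by-hand sketch of that cycle against a running spdk_tgt (add -s <socket> if the target was started on a non-default RPC socket; Malloc0 is the name a fresh target assigns):

  ./scripts/rpc.py bdev_malloc_create 8 512                       # 8 MB malloc disk, 512-byte blocks (16384 blocks)
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # Passthru0 claims Malloc0 with exclusive_write
  ./scripts/rpc.py bdev_get_bdevs | jq length                     # 2: the base bdev plus the passthru on top
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0                     # back to an empty bdev list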
00:04:09.062 * Looking for test storage... 00:04:09.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:09.062 07:21:25 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:09.062 OK 00:04:09.062 07:21:25 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:09.062 00:04:09.062 real 0m0.065s 00:04:09.062 user 0m0.026s 00:04:09.062 sys 0m0.044s 00:04:09.062 07:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.062 07:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.062 ************************************ 00:04:09.062 END TEST rpc_client 00:04:09.062 ************************************ 00:04:09.062 07:21:25 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:09.062 07:21:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.062 07:21:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.062 07:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.062 ************************************ 00:04:09.062 START TEST json_config 00:04:09.062 ************************************ 00:04:09.062 07:21:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:09.062 07:21:25 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:09.062 07:21:25 -- nvmf/common.sh@7 -- # uname -s 00:04:09.062 07:21:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.062 07:21:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.062 07:21:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.062 07:21:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.062 07:21:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.062 07:21:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.062 07:21:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.062 07:21:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.062 07:21:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.062 07:21:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.062 07:21:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:09.062 07:21:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:09.062 07:21:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.062 07:21:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.062 07:21:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:09.062 07:21:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:09.062 07:21:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.062 07:21:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.062 07:21:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.062 07:21:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.062 07:21:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.062 07:21:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.062 07:21:25 -- paths/export.sh@5 -- # export PATH 00:04:09.062 07:21:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.062 07:21:25 -- nvmf/common.sh@46 -- # : 0 00:04:09.062 07:21:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:09.062 07:21:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:09.062 07:21:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:09.062 07:21:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.062 07:21:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.062 07:21:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:09.062 07:21:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:09.062 07:21:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:09.062 07:21:25 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:09.062 07:21:25 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:09.062 07:21:25 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:09.062 07:21:25 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:09.062 07:21:25 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:09.062 07:21:25 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:09.062 07:21:25 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:09.062 07:21:25 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:09.062 07:21:25 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:09.062 07:21:25 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:09.062 07:21:25 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:09.062 07:21:25 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:09.062 07:21:25 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:09.062 07:21:25 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:09.062 07:21:25 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:09.062 INFO: JSON configuration test init 00:04:09.062 07:21:25 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:09.062 07:21:25 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:09.062 07:21:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:09.062 07:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.062 07:21:25 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:09.062 07:21:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:09.062 07:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.062 07:21:25 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:09.062 07:21:25 -- json_config/json_config.sh@98 -- # local app=target 00:04:09.062 07:21:25 -- json_config/json_config.sh@99 -- # shift 00:04:09.062 07:21:25 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:09.062 07:21:25 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:09.062 07:21:25 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:09.062 07:21:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:09.062 07:21:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:09.062 07:21:25 -- json_config/json_config.sh@111 -- # app_pid[$app]=3969718 00:04:09.062 07:21:25 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:09.062 07:21:25 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:09.062 Waiting for target to run... 00:04:09.062 07:21:25 -- json_config/json_config.sh@114 -- # waitforlisten 3969718 /var/tmp/spdk_tgt.sock 00:04:09.062 07:21:25 -- common/autotest_common.sh@819 -- # '[' -z 3969718 ']' 00:04:09.062 07:21:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:09.062 07:21:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:09.063 07:21:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:09.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:09.063 07:21:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:09.063 07:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.063 [2024-07-14 07:21:25.199904] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:09.063 [2024-07-14 07:21:25.200007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3969718 ] 00:04:09.063 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.629 [2024-07-14 07:21:25.553813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.630 [2024-07-14 07:21:25.641560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:09.630 [2024-07-14 07:21:25.641725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.195 07:21:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:10.195 07:21:26 -- common/autotest_common.sh@852 -- # return 0 00:04:10.195 07:21:26 -- json_config/json_config.sh@115 -- # echo '' 00:04:10.195 00:04:10.195 07:21:26 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:10.195 07:21:26 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:10.195 07:21:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:10.195 07:21:26 -- common/autotest_common.sh@10 -- # set +x 00:04:10.195 07:21:26 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:10.195 07:21:26 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:10.195 07:21:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:10.195 07:21:26 -- common/autotest_common.sh@10 -- # set +x 00:04:10.195 07:21:26 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:10.195 07:21:26 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:10.196 07:21:26 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:13.479 07:21:29 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:13.479 07:21:29 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:13.479 07:21:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:13.479 07:21:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.479 07:21:29 -- json_config/json_config.sh@48 -- # local ret=0 00:04:13.479 07:21:29 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:13.479 07:21:29 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:13.479 07:21:29 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:13.479 07:21:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:13.479 07:21:29 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:13.479 07:21:29 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:13.479 07:21:29 -- json_config/json_config.sh@51 -- # local get_types 00:04:13.479 07:21:29 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:13.479 07:21:29 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:13.479 07:21:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:13.479 07:21:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.479 07:21:29 -- json_config/json_config.sh@58 -- # return 0 00:04:13.479 07:21:29 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:13.479 07:21:29 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:13.479 07:21:29 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:13.479 07:21:29 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:13.479 07:21:29 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:13.479 07:21:29 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:13.479 07:21:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:13.479 07:21:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.479 07:21:29 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:13.479 07:21:29 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:13.479 07:21:29 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:13.479 07:21:29 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:13.479 07:21:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:13.737 MallocForNvmf0 00:04:13.737 07:21:29 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:13.737 07:21:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:13.995 MallocForNvmf1 00:04:13.995 07:21:30 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:13.995 07:21:30 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:14.253 [2024-07-14 07:21:30.297428] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.253 07:21:30 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:14.253 07:21:30 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:14.510 07:21:30 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:14.510 07:21:30 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:14.769 07:21:30 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:14.769 07:21:30 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.027 07:21:31 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.027 07:21:31 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.285 [2024-07-14 07:21:31.232501] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
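The configuration the test just built is a complete NVMe-oF/TCP target: two malloc bdevs exported as namespaces of one subsystem behind a TCP listener. The equivalent standalone rpc.py sequence, as a sketch (same socket and arguments as the tgt_rpc calls above):

  sock=/var/tmp/spdk_tgt.sock
  ./scripts/rpc.py -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  ./scripts/rpc.py -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  ./scripts/rpc.py -s $sock nvmf_create_transport -t tcp -u 8192 -c 0          # -u/-c mirror the test's fixed sizing flags
  ./scripts/rpc.py -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  ./scripts/rpc.py -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  ./scripts/rpc.py -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  ./scripts/rpc.py -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

An initiator-side 'nvme discover -t tcp -a 127.0.0.1 -s 4420' would then see the subsystem, which is roughly what the nvmf suites later in this job exercise.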
00:04:15.285 07:21:31 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:15.285 07:21:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:15.285 07:21:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.285 07:21:31 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:15.285 07:21:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:15.285 07:21:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.285 07:21:31 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:15.285 07:21:31 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:15.285 07:21:31 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:15.543 MallocBdevForConfigChangeCheck 00:04:15.543 07:21:31 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:15.543 07:21:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:15.543 07:21:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.543 07:21:31 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:15.543 07:21:31 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.801 07:21:31 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:15.801 INFO: shutting down applications... 00:04:15.801 07:21:31 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:15.801 07:21:31 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:15.801 07:21:31 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:15.801 07:21:31 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:17.700 Calling clear_iscsi_subsystem 00:04:17.700 Calling clear_nvmf_subsystem 00:04:17.700 Calling clear_nbd_subsystem 00:04:17.700 Calling clear_ublk_subsystem 00:04:17.700 Calling clear_vhost_blk_subsystem 00:04:17.700 Calling clear_vhost_scsi_subsystem 00:04:17.700 Calling clear_scheduler_subsystem 00:04:17.700 Calling clear_bdev_subsystem 00:04:17.700 Calling clear_accel_subsystem 00:04:17.700 Calling clear_vmd_subsystem 00:04:17.700 Calling clear_sock_subsystem 00:04:17.700 Calling clear_iobuf_subsystem 00:04:17.700 07:21:33 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:17.700 07:21:33 -- json_config/json_config.sh@396 -- # count=100 00:04:17.700 07:21:33 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:17.700 07:21:33 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.700 07:21:33 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:17.700 07:21:33 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:17.957 07:21:33 -- json_config/json_config.sh@398 -- # break 00:04:17.957 07:21:33 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:17.958 07:21:33 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:04:17.958 07:21:33 -- json_config/json_config.sh@120 -- # local app=target 00:04:17.958 07:21:33 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:17.958 07:21:33 -- json_config/json_config.sh@124 -- # [[ -n 3969718 ]] 00:04:17.958 07:21:33 -- json_config/json_config.sh@127 -- # kill -SIGINT 3969718 00:04:17.958 07:21:33 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:17.958 07:21:33 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:17.958 07:21:33 -- json_config/json_config.sh@130 -- # kill -0 3969718 00:04:17.958 07:21:33 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:18.217 07:21:34 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:18.217 07:21:34 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:18.217 07:21:34 -- json_config/json_config.sh@130 -- # kill -0 3969718 00:04:18.217 07:21:34 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:18.217 07:21:34 -- json_config/json_config.sh@132 -- # break 00:04:18.217 07:21:34 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:18.217 07:21:34 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:18.217 SPDK target shutdown done 00:04:18.217 07:21:34 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:18.217 INFO: relaunching applications... 00:04:18.217 07:21:34 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.217 07:21:34 -- json_config/json_config.sh@98 -- # local app=target 00:04:18.217 07:21:34 -- json_config/json_config.sh@99 -- # shift 00:04:18.217 07:21:34 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:18.217 07:21:34 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:18.217 07:21:34 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:18.217 07:21:34 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:18.217 07:21:34 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:18.217 07:21:34 -- json_config/json_config.sh@111 -- # app_pid[$app]=3970938 00:04:18.217 07:21:34 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:18.217 Waiting for target to run... 00:04:18.217 07:21:34 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.217 07:21:34 -- json_config/json_config.sh@114 -- # waitforlisten 3970938 /var/tmp/spdk_tgt.sock 00:04:18.217 07:21:34 -- common/autotest_common.sh@819 -- # '[' -z 3970938 ']' 00:04:18.217 07:21:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.217 07:21:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:18.217 07:21:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:18.217 07:21:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:18.217 07:21:34 -- common/autotest_common.sh@10 -- # set +x 00:04:18.475 [2024-07-14 07:21:34.429617] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:18.475 [2024-07-14 07:21:34.429707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3970938 ] 00:04:18.475 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.070 [2024-07-14 07:21:34.933874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.070 [2024-07-14 07:21:35.038594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:19.070 [2024-07-14 07:21:35.038786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.361 [2024-07-14 07:21:38.074814] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.361 [2024-07-14 07:21:38.107322] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:22.361 07:21:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:22.361 07:21:38 -- common/autotest_common.sh@852 -- # return 0 00:04:22.361 07:21:38 -- json_config/json_config.sh@115 -- # echo '' 00:04:22.361 00:04:22.361 07:21:38 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:22.361 07:21:38 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:22.361 INFO: Checking if target configuration is the same... 00:04:22.361 07:21:38 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.361 07:21:38 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:22.361 07:21:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.361 + '[' 2 -ne 2 ']' 00:04:22.361 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:22.361 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:22.361 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.361 +++ basename /dev/fd/62 00:04:22.361 ++ mktemp /tmp/62.XXX 00:04:22.361 + tmp_file_1=/tmp/62.3GY 00:04:22.361 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.361 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.361 + tmp_file_2=/tmp/spdk_tgt_config.json.Rkf 00:04:22.361 + ret=0 00:04:22.361 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:22.619 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:22.619 + diff -u /tmp/62.3GY /tmp/spdk_tgt_config.json.Rkf 00:04:22.619 + echo 'INFO: JSON config files are the same' 00:04:22.619 INFO: JSON config files are the same 00:04:22.619 + rm /tmp/62.3GY /tmp/spdk_tgt_config.json.Rkf 00:04:22.619 + exit 0 00:04:22.619 07:21:38 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:22.619 07:21:38 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:22.619 INFO: changing configuration and checking if this can be detected... 
00:04:22.619 07:21:38 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.619 07:21:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.876 07:21:38 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.877 07:21:38 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:22.877 07:21:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.877 + '[' 2 -ne 2 ']' 00:04:22.877 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:22.877 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:22.877 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.877 +++ basename /dev/fd/62 00:04:22.877 ++ mktemp /tmp/62.XXX 00:04:22.877 + tmp_file_1=/tmp/62.LEd 00:04:22.877 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.877 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.877 + tmp_file_2=/tmp/spdk_tgt_config.json.9ze 00:04:22.877 + ret=0 00:04:22.877 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.135 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:23.135 + diff -u /tmp/62.LEd /tmp/spdk_tgt_config.json.9ze 00:04:23.135 + ret=1 00:04:23.135 + echo '=== Start of file: /tmp/62.LEd ===' 00:04:23.135 + cat /tmp/62.LEd 00:04:23.135 + echo '=== End of file: /tmp/62.LEd ===' 00:04:23.135 + echo '' 00:04:23.135 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9ze ===' 00:04:23.135 + cat /tmp/spdk_tgt_config.json.9ze 00:04:23.135 + echo '=== End of file: /tmp/spdk_tgt_config.json.9ze ===' 00:04:23.135 + echo '' 00:04:23.135 + rm /tmp/62.LEd /tmp/spdk_tgt_config.json.9ze 00:04:23.135 + exit 1 00:04:23.135 07:21:39 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:23.135 INFO: configuration change detected. 
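Both the 'JSON config files are the same' and 'configuration change detected' verdicts above come from a plain textual diff of two save_config dumps normalized by config_filter.py -method sort. The technique, reduced to a sketch (assuming config_filter.py reads the config on stdin, as the harness uses it):

  rpc_py='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc_py save_config | ./test/json_config/config_filter.py -method sort > before.json
  $rpc_py bdev_malloc_delete MallocBdevForConfigChangeCheck      # any config mutation would do
  $rpc_py save_config | ./test/json_config/config_filter.py -method sort > after.json
  diff -u before.json after.json && echo 'config unchanged' || echo 'config change detected'

MallocBdevForConfigChangeCheck exists purely so the test has something safe to delete; removing it is the deliberate mutation that flips the diff to exit status 1.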
00:04:23.135 07:21:39 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:23.135 07:21:39 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:23.135 07:21:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:23.135 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:04:23.135 07:21:39 -- json_config/json_config.sh@360 -- # local ret=0 00:04:23.135 07:21:39 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:23.135 07:21:39 -- json_config/json_config.sh@370 -- # [[ -n 3970938 ]] 00:04:23.135 07:21:39 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:23.135 07:21:39 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:23.135 07:21:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:23.135 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:04:23.135 07:21:39 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:23.135 07:21:39 -- json_config/json_config.sh@246 -- # uname -s 00:04:23.135 07:21:39 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:23.135 07:21:39 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:23.135 07:21:39 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:23.135 07:21:39 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:23.135 07:21:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:23.135 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:04:23.395 07:21:39 -- json_config/json_config.sh@376 -- # killprocess 3970938 00:04:23.395 07:21:39 -- common/autotest_common.sh@926 -- # '[' -z 3970938 ']' 00:04:23.395 07:21:39 -- common/autotest_common.sh@930 -- # kill -0 3970938 00:04:23.395 07:21:39 -- common/autotest_common.sh@931 -- # uname 00:04:23.395 07:21:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:23.395 07:21:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3970938 00:04:23.395 07:21:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:23.395 07:21:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:23.395 07:21:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3970938' 00:04:23.395 killing process with pid 3970938 00:04:23.395 07:21:39 -- common/autotest_common.sh@945 -- # kill 3970938 00:04:23.395 07:21:39 -- common/autotest_common.sh@950 -- # wait 3970938 00:04:25.302 07:21:40 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.302 07:21:41 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:25.302 07:21:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:25.302 07:21:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.302 07:21:41 -- json_config/json_config.sh@381 -- # return 0 00:04:25.302 07:21:41 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:25.302 INFO: Success 00:04:25.302 00:04:25.302 real 0m15.925s 00:04:25.302 user 0m18.096s 00:04:25.302 sys 0m2.125s 00:04:25.302 07:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.302 07:21:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.302 ************************************ 00:04:25.302 END TEST json_config 00:04:25.302 ************************************ 00:04:25.302 07:21:41 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:25.302 07:21:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.302 07:21:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.302 07:21:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.302 ************************************ 00:04:25.302 START TEST json_config_extra_key 00:04:25.302 ************************************ 00:04:25.302 07:21:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:25.302 07:21:41 -- nvmf/common.sh@7 -- # uname -s 00:04:25.302 07:21:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.302 07:21:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.302 07:21:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.302 07:21:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.302 07:21:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.302 07:21:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.302 07:21:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.302 07:21:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.302 07:21:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.302 07:21:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.302 07:21:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:25.302 07:21:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:25.302 07:21:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.302 07:21:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.302 07:21:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:25.302 07:21:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:25.302 07:21:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.302 07:21:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.302 07:21:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.302 07:21:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.302 07:21:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.302 07:21:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.302 07:21:41 -- paths/export.sh@5 -- # export PATH 00:04:25.302 07:21:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.302 07:21:41 -- nvmf/common.sh@46 -- # : 0 00:04:25.302 07:21:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:25.302 07:21:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:25.302 07:21:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:25.302 07:21:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.302 07:21:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.302 07:21:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:25.302 07:21:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:25.302 07:21:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:25.302 INFO: launching applications... 
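The json_config_extra_key setup traced above keeps all per-app state in three associative arrays plus a config-path map, so the same helpers can manage several targets by key. Restated as a standalone sketch (values copied from the trace; $rootdir stands in for the absolute workspace path, and the comments are added):

    # One entry per managed app; 'target' is the only key this test uses.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")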
00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3971881 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:25.302 Waiting for target to run... 00:04:25.302 07:21:41 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3971881 /var/tmp/spdk_tgt.sock 00:04:25.302 07:21:41 -- common/autotest_common.sh@819 -- # '[' -z 3971881 ']' 00:04:25.302 07:21:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.302 07:21:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:25.302 07:21:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.302 07:21:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:25.302 07:21:41 -- common/autotest_common.sh@10 -- # set +x 00:04:25.302 [2024-07-14 07:21:41.153120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:25.302 [2024-07-14 07:21:41.153248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3971881 ] 00:04:25.302 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.562 [2024-07-14 07:21:41.655923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.819 [2024-07-14 07:21:41.761121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:25.819 [2024-07-14 07:21:41.761320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.077 07:21:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:26.077 07:21:42 -- common/autotest_common.sh@852 -- # return 0 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:26.077 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:26.077 INFO: shutting down applications... 
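Boiled down, json_config_test_start_app launches spdk_tgt with the extra-key JSON, records the PID under the app key, then blocks until the RPC socket answers. waitforlisten's real probe lives in autotest_common.sh, so the sketch below assumes spdk_get_version (an RPC this target does expose, per the rpc_get_methods listing further down this log) as the liveness check:

    app=target
    spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    # Bounded poll: succeed as soon as the UNIX-domain RPC socket responds.
    for ((i = 0; i < 100; i++)); do
        rpc.py -s "${app_socket[$app]}" spdk_get_version &>/dev/null && break
        sleep 0.1
    done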
00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3971881 ]] 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3971881 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3971881 00:04:26.077 07:21:42 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:26.642 07:21:42 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:26.642 07:21:42 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:26.642 07:21:42 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3971881 00:04:26.642 07:21:42 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:26.642 07:21:42 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:26.642 07:21:42 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:26.642 07:21:42 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:26.642 SPDK target shutdown done 00:04:26.642 07:21:42 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:26.642 Success 00:04:26.642 00:04:26.642 real 0m1.531s 00:04:26.642 user 0m1.367s 00:04:26.642 sys 0m0.594s 00:04:26.642 07:21:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.642 07:21:42 -- common/autotest_common.sh@10 -- # set +x 00:04:26.642 ************************************ 00:04:26.642 END TEST json_config_extra_key 00:04:26.642 ************************************ 00:04:26.642 07:21:42 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.642 07:21:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.642 07:21:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.642 07:21:42 -- common/autotest_common.sh@10 -- # set +x 00:04:26.642 ************************************ 00:04:26.642 START TEST alias_rpc 00:04:26.642 ************************************ 00:04:26.642 07:21:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.642 * Looking for test storage... 00:04:26.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:26.642 07:21:42 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:26.642 07:21:42 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3972190 00:04:26.642 07:21:42 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.642 07:21:42 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3972190 00:04:26.642 07:21:42 -- common/autotest_common.sh@819 -- # '[' -z 3972190 ']' 00:04:26.642 07:21:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.642 07:21:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:26.642 07:21:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:26.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.642 07:21:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:26.642 07:21:42 -- common/autotest_common.sh@10 -- # set +x 00:04:26.642 [2024-07-14 07:21:42.704805] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:26.642 [2024-07-14 07:21:42.704901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972190 ] 00:04:26.642 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.642 [2024-07-14 07:21:42.765451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.901 [2024-07-14 07:21:42.880362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:26.901 [2024-07-14 07:21:42.880552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.835 07:21:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:27.835 07:21:43 -- common/autotest_common.sh@852 -- # return 0 00:04:27.835 07:21:43 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:27.835 07:21:43 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3972190 00:04:27.835 07:21:43 -- common/autotest_common.sh@926 -- # '[' -z 3972190 ']' 00:04:27.835 07:21:43 -- common/autotest_common.sh@930 -- # kill -0 3972190 00:04:27.835 07:21:43 -- common/autotest_common.sh@931 -- # uname 00:04:27.835 07:21:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:27.836 07:21:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3972190 00:04:27.836 07:21:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:27.836 07:21:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:27.836 07:21:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3972190' 00:04:27.836 killing process with pid 3972190 00:04:27.836 07:21:43 -- common/autotest_common.sh@945 -- # kill 3972190 00:04:27.836 07:21:43 -- common/autotest_common.sh@950 -- # wait 3972190 00:04:28.401 00:04:28.401 real 0m1.815s 00:04:28.401 user 0m2.079s 00:04:28.401 sys 0m0.471s 00:04:28.401 07:21:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.401 07:21:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.401 ************************************ 00:04:28.401 END TEST alias_rpc 00:04:28.401 ************************************ 00:04:28.401 07:21:44 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:28.401 07:21:44 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:28.401 07:21:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.401 07:21:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.401 07:21:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.401 ************************************ 00:04:28.401 START TEST spdkcli_tcp 00:04:28.401 ************************************ 00:04:28.401 07:21:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:28.401 * Looking for test storage... 
00:04:28.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:28.401 07:21:44 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:28.401 07:21:44 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:28.401 07:21:44 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:28.401 07:21:44 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:28.401 07:21:44 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:28.401 07:21:44 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:28.401 07:21:44 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:28.401 07:21:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:28.401 07:21:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.401 07:21:44 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3972390 00:04:28.401 07:21:44 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:28.401 07:21:44 -- spdkcli/tcp.sh@27 -- # waitforlisten 3972390 00:04:28.401 07:21:44 -- common/autotest_common.sh@819 -- # '[' -z 3972390 ']' 00:04:28.401 07:21:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.401 07:21:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:28.401 07:21:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.401 07:21:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:28.401 07:21:44 -- common/autotest_common.sh@10 -- # set +x 00:04:28.401 [2024-07-14 07:21:44.546896] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:28.401 [2024-07-14 07:21:44.546987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972390 ] 00:04:28.659 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.659 [2024-07-14 07:21:44.605328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.659 [2024-07-14 07:21:44.710860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:28.659 [2024-07-14 07:21:44.711077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.659 [2024-07-14 07:21:44.711083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.594 07:21:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:29.594 07:21:45 -- common/autotest_common.sh@852 -- # return 0 00:04:29.594 07:21:45 -- spdkcli/tcp.sh@31 -- # socat_pid=3972527 00:04:29.594 07:21:45 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:29.594 07:21:45 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:29.594 [ 00:04:29.594 "bdev_malloc_delete", 00:04:29.594 "bdev_malloc_create", 00:04:29.594 "bdev_null_resize", 00:04:29.594 "bdev_null_delete", 00:04:29.594 "bdev_null_create", 00:04:29.594 "bdev_nvme_cuse_unregister", 00:04:29.594 "bdev_nvme_cuse_register", 00:04:29.594 "bdev_opal_new_user", 00:04:29.594 "bdev_opal_set_lock_state", 00:04:29.594 "bdev_opal_delete", 00:04:29.594 "bdev_opal_get_info", 00:04:29.594 "bdev_opal_create", 00:04:29.594 "bdev_nvme_opal_revert", 00:04:29.594 "bdev_nvme_opal_init", 00:04:29.594 "bdev_nvme_send_cmd", 00:04:29.594 "bdev_nvme_get_path_iostat", 00:04:29.594 "bdev_nvme_get_mdns_discovery_info", 00:04:29.594 "bdev_nvme_stop_mdns_discovery", 00:04:29.594 "bdev_nvme_start_mdns_discovery", 00:04:29.594 "bdev_nvme_set_multipath_policy", 00:04:29.594 "bdev_nvme_set_preferred_path", 00:04:29.594 "bdev_nvme_get_io_paths", 00:04:29.594 "bdev_nvme_remove_error_injection", 00:04:29.594 "bdev_nvme_add_error_injection", 00:04:29.594 "bdev_nvme_get_discovery_info", 00:04:29.594 "bdev_nvme_stop_discovery", 00:04:29.594 "bdev_nvme_start_discovery", 00:04:29.594 "bdev_nvme_get_controller_health_info", 00:04:29.594 "bdev_nvme_disable_controller", 00:04:29.594 "bdev_nvme_enable_controller", 00:04:29.594 "bdev_nvme_reset_controller", 00:04:29.594 "bdev_nvme_get_transport_statistics", 00:04:29.594 "bdev_nvme_apply_firmware", 00:04:29.594 "bdev_nvme_detach_controller", 00:04:29.594 "bdev_nvme_get_controllers", 00:04:29.594 "bdev_nvme_attach_controller", 00:04:29.594 "bdev_nvme_set_hotplug", 00:04:29.594 "bdev_nvme_set_options", 00:04:29.594 "bdev_passthru_delete", 00:04:29.594 "bdev_passthru_create", 00:04:29.594 "bdev_lvol_grow_lvstore", 00:04:29.594 "bdev_lvol_get_lvols", 00:04:29.594 "bdev_lvol_get_lvstores", 00:04:29.594 "bdev_lvol_delete", 00:04:29.594 "bdev_lvol_set_read_only", 00:04:29.594 "bdev_lvol_resize", 00:04:29.594 "bdev_lvol_decouple_parent", 00:04:29.594 "bdev_lvol_inflate", 00:04:29.594 "bdev_lvol_rename", 00:04:29.594 "bdev_lvol_clone_bdev", 00:04:29.594 "bdev_lvol_clone", 00:04:29.594 "bdev_lvol_snapshot", 00:04:29.594 "bdev_lvol_create", 00:04:29.594 "bdev_lvol_delete_lvstore", 00:04:29.594 "bdev_lvol_rename_lvstore", 00:04:29.594 "bdev_lvol_create_lvstore", 00:04:29.594 "bdev_raid_set_options", 00:04:29.594 
"bdev_raid_remove_base_bdev", 00:04:29.594 "bdev_raid_add_base_bdev", 00:04:29.594 "bdev_raid_delete", 00:04:29.594 "bdev_raid_create", 00:04:29.594 "bdev_raid_get_bdevs", 00:04:29.594 "bdev_error_inject_error", 00:04:29.594 "bdev_error_delete", 00:04:29.594 "bdev_error_create", 00:04:29.594 "bdev_split_delete", 00:04:29.594 "bdev_split_create", 00:04:29.594 "bdev_delay_delete", 00:04:29.594 "bdev_delay_create", 00:04:29.594 "bdev_delay_update_latency", 00:04:29.594 "bdev_zone_block_delete", 00:04:29.594 "bdev_zone_block_create", 00:04:29.594 "blobfs_create", 00:04:29.594 "blobfs_detect", 00:04:29.594 "blobfs_set_cache_size", 00:04:29.594 "bdev_aio_delete", 00:04:29.594 "bdev_aio_rescan", 00:04:29.594 "bdev_aio_create", 00:04:29.594 "bdev_ftl_set_property", 00:04:29.594 "bdev_ftl_get_properties", 00:04:29.594 "bdev_ftl_get_stats", 00:04:29.594 "bdev_ftl_unmap", 00:04:29.594 "bdev_ftl_unload", 00:04:29.594 "bdev_ftl_delete", 00:04:29.594 "bdev_ftl_load", 00:04:29.594 "bdev_ftl_create", 00:04:29.594 "bdev_virtio_attach_controller", 00:04:29.594 "bdev_virtio_scsi_get_devices", 00:04:29.594 "bdev_virtio_detach_controller", 00:04:29.594 "bdev_virtio_blk_set_hotplug", 00:04:29.594 "bdev_iscsi_delete", 00:04:29.594 "bdev_iscsi_create", 00:04:29.594 "bdev_iscsi_set_options", 00:04:29.594 "accel_error_inject_error", 00:04:29.594 "ioat_scan_accel_module", 00:04:29.594 "dsa_scan_accel_module", 00:04:29.594 "iaa_scan_accel_module", 00:04:29.594 "iscsi_set_options", 00:04:29.594 "iscsi_get_auth_groups", 00:04:29.594 "iscsi_auth_group_remove_secret", 00:04:29.594 "iscsi_auth_group_add_secret", 00:04:29.594 "iscsi_delete_auth_group", 00:04:29.594 "iscsi_create_auth_group", 00:04:29.594 "iscsi_set_discovery_auth", 00:04:29.594 "iscsi_get_options", 00:04:29.594 "iscsi_target_node_request_logout", 00:04:29.594 "iscsi_target_node_set_redirect", 00:04:29.594 "iscsi_target_node_set_auth", 00:04:29.594 "iscsi_target_node_add_lun", 00:04:29.594 "iscsi_get_connections", 00:04:29.594 "iscsi_portal_group_set_auth", 00:04:29.594 "iscsi_start_portal_group", 00:04:29.594 "iscsi_delete_portal_group", 00:04:29.594 "iscsi_create_portal_group", 00:04:29.594 "iscsi_get_portal_groups", 00:04:29.594 "iscsi_delete_target_node", 00:04:29.594 "iscsi_target_node_remove_pg_ig_maps", 00:04:29.594 "iscsi_target_node_add_pg_ig_maps", 00:04:29.594 "iscsi_create_target_node", 00:04:29.594 "iscsi_get_target_nodes", 00:04:29.594 "iscsi_delete_initiator_group", 00:04:29.594 "iscsi_initiator_group_remove_initiators", 00:04:29.594 "iscsi_initiator_group_add_initiators", 00:04:29.594 "iscsi_create_initiator_group", 00:04:29.594 "iscsi_get_initiator_groups", 00:04:29.594 "nvmf_set_crdt", 00:04:29.594 "nvmf_set_config", 00:04:29.594 "nvmf_set_max_subsystems", 00:04:29.594 "nvmf_subsystem_get_listeners", 00:04:29.594 "nvmf_subsystem_get_qpairs", 00:04:29.594 "nvmf_subsystem_get_controllers", 00:04:29.594 "nvmf_get_stats", 00:04:29.594 "nvmf_get_transports", 00:04:29.594 "nvmf_create_transport", 00:04:29.594 "nvmf_get_targets", 00:04:29.594 "nvmf_delete_target", 00:04:29.594 "nvmf_create_target", 00:04:29.594 "nvmf_subsystem_allow_any_host", 00:04:29.594 "nvmf_subsystem_remove_host", 00:04:29.594 "nvmf_subsystem_add_host", 00:04:29.594 "nvmf_subsystem_remove_ns", 00:04:29.594 "nvmf_subsystem_add_ns", 00:04:29.594 "nvmf_subsystem_listener_set_ana_state", 00:04:29.594 "nvmf_discovery_get_referrals", 00:04:29.594 "nvmf_discovery_remove_referral", 00:04:29.594 "nvmf_discovery_add_referral", 00:04:29.594 "nvmf_subsystem_remove_listener", 
00:04:29.594 "nvmf_subsystem_add_listener", 00:04:29.594 "nvmf_delete_subsystem", 00:04:29.594 "nvmf_create_subsystem", 00:04:29.594 "nvmf_get_subsystems", 00:04:29.594 "env_dpdk_get_mem_stats", 00:04:29.594 "nbd_get_disks", 00:04:29.594 "nbd_stop_disk", 00:04:29.594 "nbd_start_disk", 00:04:29.594 "ublk_recover_disk", 00:04:29.594 "ublk_get_disks", 00:04:29.594 "ublk_stop_disk", 00:04:29.594 "ublk_start_disk", 00:04:29.595 "ublk_destroy_target", 00:04:29.595 "ublk_create_target", 00:04:29.595 "virtio_blk_create_transport", 00:04:29.595 "virtio_blk_get_transports", 00:04:29.595 "vhost_controller_set_coalescing", 00:04:29.595 "vhost_get_controllers", 00:04:29.595 "vhost_delete_controller", 00:04:29.595 "vhost_create_blk_controller", 00:04:29.595 "vhost_scsi_controller_remove_target", 00:04:29.595 "vhost_scsi_controller_add_target", 00:04:29.595 "vhost_start_scsi_controller", 00:04:29.595 "vhost_create_scsi_controller", 00:04:29.595 "thread_set_cpumask", 00:04:29.595 "framework_get_scheduler", 00:04:29.595 "framework_set_scheduler", 00:04:29.595 "framework_get_reactors", 00:04:29.595 "thread_get_io_channels", 00:04:29.595 "thread_get_pollers", 00:04:29.595 "thread_get_stats", 00:04:29.595 "framework_monitor_context_switch", 00:04:29.595 "spdk_kill_instance", 00:04:29.595 "log_enable_timestamps", 00:04:29.595 "log_get_flags", 00:04:29.595 "log_clear_flag", 00:04:29.595 "log_set_flag", 00:04:29.595 "log_get_level", 00:04:29.595 "log_set_level", 00:04:29.595 "log_get_print_level", 00:04:29.595 "log_set_print_level", 00:04:29.595 "framework_enable_cpumask_locks", 00:04:29.595 "framework_disable_cpumask_locks", 00:04:29.595 "framework_wait_init", 00:04:29.595 "framework_start_init", 00:04:29.595 "scsi_get_devices", 00:04:29.595 "bdev_get_histogram", 00:04:29.595 "bdev_enable_histogram", 00:04:29.595 "bdev_set_qos_limit", 00:04:29.595 "bdev_set_qd_sampling_period", 00:04:29.595 "bdev_get_bdevs", 00:04:29.595 "bdev_reset_iostat", 00:04:29.595 "bdev_get_iostat", 00:04:29.595 "bdev_examine", 00:04:29.595 "bdev_wait_for_examine", 00:04:29.595 "bdev_set_options", 00:04:29.595 "notify_get_notifications", 00:04:29.595 "notify_get_types", 00:04:29.595 "accel_get_stats", 00:04:29.595 "accel_set_options", 00:04:29.595 "accel_set_driver", 00:04:29.595 "accel_crypto_key_destroy", 00:04:29.595 "accel_crypto_keys_get", 00:04:29.595 "accel_crypto_key_create", 00:04:29.595 "accel_assign_opc", 00:04:29.595 "accel_get_module_info", 00:04:29.595 "accel_get_opc_assignments", 00:04:29.595 "vmd_rescan", 00:04:29.595 "vmd_remove_device", 00:04:29.595 "vmd_enable", 00:04:29.595 "sock_set_default_impl", 00:04:29.595 "sock_impl_set_options", 00:04:29.595 "sock_impl_get_options", 00:04:29.595 "iobuf_get_stats", 00:04:29.595 "iobuf_set_options", 00:04:29.595 "framework_get_pci_devices", 00:04:29.595 "framework_get_config", 00:04:29.595 "framework_get_subsystems", 00:04:29.595 "trace_get_info", 00:04:29.595 "trace_get_tpoint_group_mask", 00:04:29.595 "trace_disable_tpoint_group", 00:04:29.595 "trace_enable_tpoint_group", 00:04:29.595 "trace_clear_tpoint_mask", 00:04:29.595 "trace_set_tpoint_mask", 00:04:29.595 "spdk_get_version", 00:04:29.595 "rpc_get_methods" 00:04:29.595 ] 00:04:29.595 07:21:45 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:29.595 07:21:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:29.595 07:21:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.595 07:21:45 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:29.595 07:21:45 -- spdkcli/tcp.sh@38 -- # killprocess 
3972390 00:04:29.595 07:21:45 -- common/autotest_common.sh@926 -- # '[' -z 3972390 ']' 00:04:29.595 07:21:45 -- common/autotest_common.sh@930 -- # kill -0 3972390 00:04:29.595 07:21:45 -- common/autotest_common.sh@931 -- # uname 00:04:29.595 07:21:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:29.595 07:21:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3972390 00:04:29.595 07:21:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:29.595 07:21:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:29.595 07:21:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3972390' 00:04:29.595 killing process with pid 3972390 00:04:29.595 07:21:45 -- common/autotest_common.sh@945 -- # kill 3972390 00:04:29.595 07:21:45 -- common/autotest_common.sh@950 -- # wait 3972390 00:04:30.162 00:04:30.162 real 0m1.763s 00:04:30.162 user 0m3.373s 00:04:30.162 sys 0m0.478s 00:04:30.162 07:21:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.162 07:21:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.162 ************************************ 00:04:30.162 END TEST spdkcli_tcp 00:04:30.162 ************************************ 00:04:30.162 07:21:46 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.162 07:21:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.162 07:21:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.162 07:21:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.162 ************************************ 00:04:30.162 START TEST dpdk_mem_utility 00:04:30.162 ************************************ 00:04:30.162 07:21:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.162 * Looking for test storage... 00:04:30.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:30.162 07:21:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:30.162 07:21:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3972724 00:04:30.162 07:21:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.162 07:21:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3972724 00:04:30.162 07:21:46 -- common/autotest_common.sh@819 -- # '[' -z 3972724 ']' 00:04:30.162 07:21:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.162 07:21:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:30.162 07:21:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.162 07:21:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:30.162 07:21:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.162 [2024-07-14 07:21:46.331517] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:30.162 [2024-07-14 07:21:46.331608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972724 ] 00:04:30.423 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.423 [2024-07-14 07:21:46.387897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.423 [2024-07-14 07:21:46.491123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:30.423 [2024-07-14 07:21:46.491311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.364 07:21:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:31.364 07:21:47 -- common/autotest_common.sh@852 -- # return 0 00:04:31.364 07:21:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:31.364 07:21:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:31.364 07:21:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.364 07:21:47 -- common/autotest_common.sh@10 -- # set +x 00:04:31.364 { 00:04:31.364 "filename": "/tmp/spdk_mem_dump.txt" 00:04:31.364 } 00:04:31.364 07:21:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.364 07:21:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:31.364 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:31.364 1 heaps totaling size 814.000000 MiB 00:04:31.364 size: 814.000000 MiB heap id: 0 00:04:31.364 end heaps---------- 00:04:31.364 8 mempools totaling size 598.116089 MiB 00:04:31.364 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:31.364 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:31.364 size: 84.521057 MiB name: bdev_io_3972724 00:04:31.364 size: 51.011292 MiB name: evtpool_3972724 00:04:31.364 size: 50.003479 MiB name: msgpool_3972724 00:04:31.364 size: 21.763794 MiB name: PDU_Pool 00:04:31.364 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:31.364 size: 0.026123 MiB name: Session_Pool 00:04:31.364 end mempools------- 00:04:31.364 6 memzones totaling size 4.142822 MiB 00:04:31.364 size: 1.000366 MiB name: RG_ring_0_3972724 00:04:31.364 size: 1.000366 MiB name: RG_ring_1_3972724 00:04:31.364 size: 1.000366 MiB name: RG_ring_4_3972724 00:04:31.364 size: 1.000366 MiB name: RG_ring_5_3972724 00:04:31.364 size: 0.125366 MiB name: RG_ring_2_3972724 00:04:31.364 size: 0.015991 MiB name: RG_ring_3_3972724 00:04:31.364 end memzones------- 00:04:31.364 07:21:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:31.364 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:31.364 list of free elements. 
size: 12.519348 MiB 00:04:31.364 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:31.364 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:31.364 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:31.364 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:31.364 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:31.364 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:31.364 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:31.364 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:31.364 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:31.364 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:31.364 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:31.364 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:31.364 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:31.364 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:31.364 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:31.364 list of standard malloc elements. size: 199.218079 MiB 00:04:31.364 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:31.364 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:31.364 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:31.364 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:31.364 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:31.364 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:31.364 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:31.364 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:31.364 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:31.364 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:31.364 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:31.364 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:31.364 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:31.364 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:31.364 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:31.364 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:31.364 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:31.364 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:31.364 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:31.364 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:31.364 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:31.364 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:31.364 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:31.364 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:31.364 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:31.364 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:31.364 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:31.364 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:31.365 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:31.365 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:31.365 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:31.365 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:31.365 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:31.365 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:31.365 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:31.365 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:31.365 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:31.365 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:31.365 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:31.365 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:31.365 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:31.365 list of memzone associated elements. size: 602.262573 MiB 00:04:31.365 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:31.365 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:31.365 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:31.365 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:31.365 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:31.365 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3972724_0 00:04:31.365 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:31.365 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3972724_0 00:04:31.365 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:31.365 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3972724_0 00:04:31.365 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:31.365 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:31.365 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:31.365 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:31.365 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:31.365 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3972724 00:04:31.365 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:31.365 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3972724 00:04:31.365 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:31.365 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3972724 00:04:31.365 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:31.365 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:31.365 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:31.365 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:31.365 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:31.365 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:31.365 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:31.365 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:31.365 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:31.365 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3972724 00:04:31.365 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:31.365 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3972724 00:04:31.365 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:31.365 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3972724 00:04:31.365 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:31.365 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3972724 00:04:31.365 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:31.365 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3972724 00:04:31.365 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:31.365 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:31.365 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:31.365 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:31.365 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:31.365 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:31.365 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:31.365 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3972724 00:04:31.365 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:31.365 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:31.365 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:31.365 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:31.365 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:31.365 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3972724 00:04:31.365 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:31.365 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:31.365 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:31.365 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3972724 00:04:31.365 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:31.365 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3972724 00:04:31.365 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:31.365 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:31.365 07:21:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:31.365 07:21:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3972724 00:04:31.365 07:21:47 -- common/autotest_common.sh@926 -- # '[' -z 3972724 ']' 00:04:31.365 07:21:47 -- common/autotest_common.sh@930 -- # kill -0 3972724 00:04:31.365 07:21:47 -- common/autotest_common.sh@931 -- # uname 00:04:31.365 07:21:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:31.365 07:21:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3972724 00:04:31.365 07:21:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:31.365 07:21:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:31.365 07:21:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3972724' 00:04:31.365 killing process with pid 3972724 00:04:31.365 07:21:47 -- common/autotest_common.sh@945 -- # kill 3972724 00:04:31.365 07:21:47 -- common/autotest_common.sh@950 -- # wait 3972724 00:04:31.933 00:04:31.933 real 0m1.631s 00:04:31.933 user 0m1.793s 00:04:31.933 sys 0m0.425s 00:04:31.933 07:21:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.933 07:21:47 -- common/autotest_common.sh@10 -- # set +x 00:04:31.933 ************************************ 00:04:31.933 END TEST dpdk_mem_utility 00:04:31.933 ************************************ 00:04:31.933 07:21:47 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:31.933 07:21:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.933 07:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.933 07:21:47 -- common/autotest_common.sh@10 -- # set +x 
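The memory-utility pass above is reproducible by hand against any running target: ask it to dump its DPDK memory state over RPC, then let the helper script digest the dump. A minimal sketch using the same commands the test traced (socket and dump paths as reported in this log):

    # Ask spdk_tgt to write its DPDK memory snapshot; the reply names the dump file.
    rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    # Summarize heaps, mempools, and memzones from that dump...
    scripts/dpdk_mem_info.py
    # ...or list the individual elements of heap id 0, as printed above.
    scripts/dpdk_mem_info.py -m 0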
00:04:31.933 ************************************ 00:04:31.933 START TEST event 00:04:31.933 ************************************ 00:04:31.933 07:21:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:31.933 * Looking for test storage... 00:04:31.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:31.933 07:21:47 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:31.933 07:21:47 -- bdev/nbd_common.sh@6 -- # set -e 00:04:31.933 07:21:47 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:31.933 07:21:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:31.933 07:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.933 07:21:47 -- common/autotest_common.sh@10 -- # set +x 00:04:31.933 ************************************ 00:04:31.933 START TEST event_perf 00:04:31.933 ************************************ 00:04:31.933 07:21:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:31.933 Running I/O for 1 seconds...[2024-07-14 07:21:47.963523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:31.933 [2024-07-14 07:21:47.963608] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972922 ] 00:04:31.933 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.933 [2024-07-14 07:21:48.027094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.191 [2024-07-14 07:21:48.138075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.191 [2024-07-14 07:21:48.138132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.191 [2024-07-14 07:21:48.138197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.191 [2024-07-14 07:21:48.138200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.126 Running I/O for 1 seconds... 00:04:33.126 lcore 0: 230483 00:04:33.126 lcore 1: 230485 00:04:33.126 lcore 2: 230483 00:04:33.126 lcore 3: 230483 00:04:33.126 done. 
00:04:33.126 00:04:33.126 real 0m1.312s 00:04:33.126 user 0m4.221s 00:04:33.126 sys 0m0.084s 00:04:33.126 07:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.126 07:21:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.126 ************************************ 00:04:33.126 END TEST event_perf 00:04:33.126 ************************************ 00:04:33.126 07:21:49 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:33.126 07:21:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:33.126 07:21:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.126 07:21:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.126 ************************************ 00:04:33.126 START TEST event_reactor 00:04:33.126 ************************************ 00:04:33.126 07:21:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:33.385 [2024-07-14 07:21:49.301433] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:33.385 [2024-07-14 07:21:49.301521] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973088 ] 00:04:33.385 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.385 [2024-07-14 07:21:49.367011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.385 [2024-07-14 07:21:49.482071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.761 test_start 00:04:34.761 oneshot 00:04:34.761 tick 100 00:04:34.761 tick 100 00:04:34.761 tick 250 00:04:34.761 tick 100 00:04:34.761 tick 100 00:04:34.761 tick 100 00:04:34.761 tick 250 00:04:34.761 tick 500 00:04:34.761 tick 100 00:04:34.761 tick 100 00:04:34.761 tick 250 00:04:34.761 tick 100 00:04:34.761 tick 100 00:04:34.761 test_end 00:04:34.761 00:04:34.761 real 0m1.317s 00:04:34.761 user 0m1.230s 00:04:34.761 sys 0m0.081s 00:04:34.761 07:21:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.761 07:21:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.761 ************************************ 00:04:34.761 END TEST event_reactor 00:04:34.761 ************************************ 00:04:34.761 07:21:50 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:34.761 07:21:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:34.761 07:21:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.761 07:21:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.761 ************************************ 00:04:34.761 START TEST event_reactor_perf 00:04:34.761 ************************************ 00:04:34.761 07:21:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:34.761 [2024-07-14 07:21:50.646960] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:34.761 [2024-07-14 07:21:50.647048] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973363 ] 00:04:34.761 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.761 [2024-07-14 07:21:50.710284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.761 [2024-07-14 07:21:50.823790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.173 test_start 00:04:36.173 test_end 00:04:36.173 Performance: 350315 events per second 00:04:36.173 00:04:36.173 real 0m1.315s 00:04:36.173 user 0m1.229s 00:04:36.173 sys 0m0.080s 00:04:36.173 07:21:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.173 07:21:51 -- common/autotest_common.sh@10 -- # set +x 00:04:36.173 ************************************ 00:04:36.173 END TEST event_reactor_perf 00:04:36.173 ************************************ 00:04:36.173 07:21:51 -- event/event.sh@49 -- # uname -s 00:04:36.173 07:21:51 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:36.173 07:21:51 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:36.173 07:21:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.173 07:21:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.173 07:21:51 -- common/autotest_common.sh@10 -- # set +x 00:04:36.173 ************************************ 00:04:36.173 START TEST event_scheduler 00:04:36.173 ************************************ 00:04:36.173 07:21:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:36.173 * Looking for test storage... 00:04:36.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:36.173 07:21:52 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:36.173 07:21:52 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3973545 00:04:36.173 07:21:52 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:36.173 07:21:52 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.173 07:21:52 -- scheduler/scheduler.sh@37 -- # waitforlisten 3973545 00:04:36.173 07:21:52 -- common/autotest_common.sh@819 -- # '[' -z 3973545 ']' 00:04:36.173 07:21:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.173 07:21:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.173 07:21:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.173 07:21:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.173 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.173 [2024-07-14 07:21:52.069512] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:36.173 [2024-07-14 07:21:52.069592] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973545 ] 00:04:36.173 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.173 [2024-07-14 07:21:52.126051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.173 [2024-07-14 07:21:52.235891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.173 [2024-07-14 07:21:52.235938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.173 [2024-07-14 07:21:52.235987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.173 [2024-07-14 07:21:52.235991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.173 07:21:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:36.173 07:21:52 -- common/autotest_common.sh@852 -- # return 0 00:04:36.173 07:21:52 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:36.173 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.173 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.173 POWER: Env isn't set yet! 00:04:36.173 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:36.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:36.174 POWER: Cannot get available frequencies of lcore 0 00:04:36.174 POWER: Attempting to initialise PSTAT power management... 00:04:36.174 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:36.174 POWER: Initialized successfully for lcore 0 power management 00:04:36.174 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:36.174 POWER: Initialized successfully for lcore 1 power management 00:04:36.174 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:36.174 POWER: Initialized successfully for lcore 2 power management 00:04:36.174 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:36.174 POWER: Initialized successfully for lcore 3 power management 00:04:36.174 [2024-07-14 07:21:52.303074] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:36.174 [2024-07-14 07:21:52.303092] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:36.174 [2024-07-14 07:21:52.303102] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:36.174 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.174 07:21:52 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:36.174 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.174 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 [2024-07-14 07:21:52.402994] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
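The scheduler bring-up just traced follows the required order: the app starts with --wait-for-rpc, the scheduler is selected while the framework is still paused, and only then does subsystem initialization run. Condensed to the two RPCs involved (both appear in this build's rpc_get_methods output; the thresholds are the ones the dynamic scheduler logged):

    # Select the dynamic scheduler before the framework finishes starting...
    rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    # ...then resume initialization; the scheduler reports load limit 20,
    # core limit 80, core busy 95 once it takes over.
    rpc.py -s /var/tmp/spdk.sock framework_start_init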
00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:36.432 07:21:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.432 07:21:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 ************************************ 00:04:36.432 START TEST scheduler_create_thread 00:04:36.432 ************************************ 00:04:36.432 07:21:52 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 2 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 3 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 4 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 5 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 6 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 7 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 8 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 9 00:04:36.432 
07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 10 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.432 07:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.432 07:21:52 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:36.432 07:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.432 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.997 07:21:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:36.997 07:21:53 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:36.997 07:21:53 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:36.997 07:21:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:36.997 07:21:53 -- common/autotest_common.sh@10 -- # set +x 00:04:38.368 07:21:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:38.368 00:04:38.368 real 0m1.753s 00:04:38.368 user 0m0.014s 00:04:38.368 sys 0m0.000s 00:04:38.368 07:21:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.368 07:21:54 -- common/autotest_common.sh@10 -- # set +x 00:04:38.368 ************************************ 00:04:38.368 END TEST scheduler_create_thread 00:04:38.368 ************************************ 00:04:38.368 07:21:54 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:38.368 07:21:54 -- scheduler/scheduler.sh@46 -- # killprocess 3973545 00:04:38.368 07:21:54 -- common/autotest_common.sh@926 -- # '[' -z 3973545 ']' 00:04:38.368 07:21:54 -- common/autotest_common.sh@930 -- # kill -0 3973545 00:04:38.368 07:21:54 -- common/autotest_common.sh@931 -- # uname 00:04:38.368 07:21:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:38.368 07:21:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3973545 00:04:38.368 07:21:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:38.368 07:21:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:38.368 07:21:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3973545' 00:04:38.368 killing process with pid 3973545 00:04:38.368 07:21:54 -- common/autotest_common.sh@945 -- # kill 3973545 00:04:38.368 07:21:54 -- common/autotest_common.sh@950 -- # wait 3973545 00:04:38.626 [2024-07-14 07:21:54.642644] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
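The scheduler_create_thread test above drives the scheduler entirely through plugin RPCs: four busy threads pinned one per core, four idle pinned threads, an unpinned thread whose activity is changed at run time, and one thread created only to be deleted mid-run. A minimal standalone sketch of that call sequence, assuming an SPDK app is already listening on the default /var/tmp/spdk.sock and the scheduler_plugin module is importable (e.g. on PYTHONPATH):

#!/usr/bin/env bash
# Sketch only: replays the RPC sequence visible in the trace above.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# One fully busy (-a 100) thread pinned to each of cores 0-3.
for mask in 0x1 0x2 0x4 0x8; do
  $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
done

# An unpinned thread created idle, then raised to 50% activity; the RPC
# prints the new thread id, which the harness captures the same way.
tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
$RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

# A thread created only to prove deletion works while the scheduler runs.
tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"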
00:04:38.626 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:38.626 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:38.626 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:38.626 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:38.626 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:38.626 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:38.626 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:38.626 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:38.885 00:04:38.885 real 0m2.919s 00:04:38.885 user 0m3.730s 00:04:38.885 sys 0m0.300s 00:04:38.885 07:21:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.885 07:21:54 -- common/autotest_common.sh@10 -- # set +x 00:04:38.885 ************************************ 00:04:38.885 END TEST event_scheduler 00:04:38.885 ************************************ 00:04:38.885 07:21:54 -- event/event.sh@51 -- # modprobe -n nbd 00:04:38.885 07:21:54 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:38.885 07:21:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.885 07:21:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.885 07:21:54 -- common/autotest_common.sh@10 -- # set +x 00:04:38.885 ************************************ 00:04:38.885 START TEST app_repeat 00:04:38.885 ************************************ 00:04:38.885 07:21:54 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:38.885 07:21:54 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.885 07:21:54 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.885 07:21:54 -- event/event.sh@13 -- # local nbd_list 00:04:38.885 07:21:54 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.885 07:21:54 -- event/event.sh@14 -- # local bdev_list 00:04:38.885 07:21:54 -- event/event.sh@15 -- # local repeat_times=4 00:04:38.885 07:21:54 -- event/event.sh@17 -- # modprobe nbd 00:04:38.885 07:21:54 -- event/event.sh@19 -- # repeat_pid=3973882 00:04:38.885 07:21:54 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:38.885 07:21:54 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.885 07:21:54 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3973882' 00:04:38.885 Process app_repeat pid: 3973882 00:04:38.885 07:21:54 -- event/event.sh@23 -- # for i in {0..2} 00:04:38.885 07:21:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:38.885 spdk_app_start Round 0 00:04:38.885 07:21:54 -- event/event.sh@25 -- # waitforlisten 3973882 /var/tmp/spdk-nbd.sock 00:04:38.885 07:21:54 -- common/autotest_common.sh@819 -- # '[' -z 3973882 ']' 00:04:38.885 07:21:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.885 07:21:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:38.885 07:21:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:38.885 07:21:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:38.885 07:21:54 -- common/autotest_common.sh@10 -- # set +x 00:04:38.885 [2024-07-14 07:21:54.960655] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:38.885 [2024-07-14 07:21:54.960741] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3973882 ] 00:04:38.885 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.885 [2024-07-14 07:21:55.025659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.144 [2024-07-14 07:21:55.142213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.144 [2024-07-14 07:21:55.142219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.077 07:21:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:40.078 07:21:55 -- common/autotest_common.sh@852 -- # return 0 00:04:40.078 07:21:55 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.078 Malloc0 00:04:40.078 07:21:56 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.335 Malloc1 00:04:40.335 07:21:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.335 07:21:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.335 07:21:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.335 07:21:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.335 07:21:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.335 07:21:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@12 -- # local i 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.336 07:21:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.593 /dev/nbd0 00:04:40.593 07:21:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.593 07:21:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.593 07:21:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:40.593 07:21:56 -- common/autotest_common.sh@857 -- # local i 00:04:40.593 07:21:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:40.593 07:21:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:40.593 07:21:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:40.593 07:21:56 -- 
common/autotest_common.sh@861 -- # break 00:04:40.593 07:21:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:40.593 07:21:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:40.593 07:21:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.593 1+0 records in 00:04:40.593 1+0 records out 00:04:40.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174027 s, 23.5 MB/s 00:04:40.593 07:21:56 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.593 07:21:56 -- common/autotest_common.sh@874 -- # size=4096 00:04:40.593 07:21:56 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.593 07:21:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:40.593 07:21:56 -- common/autotest_common.sh@877 -- # return 0 00:04:40.593 07:21:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.593 07:21:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.593 07:21:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.851 /dev/nbd1 00:04:40.851 07:21:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.851 07:21:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.851 07:21:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:40.851 07:21:56 -- common/autotest_common.sh@857 -- # local i 00:04:40.851 07:21:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:40.851 07:21:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:40.851 07:21:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:40.851 07:21:56 -- common/autotest_common.sh@861 -- # break 00:04:40.851 07:21:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:40.851 07:21:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:40.851 07:21:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.851 1+0 records in 00:04:40.851 1+0 records out 00:04:40.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203252 s, 20.2 MB/s 00:04:40.851 07:21:56 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.851 07:21:56 -- common/autotest_common.sh@874 -- # size=4096 00:04:40.851 07:21:56 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.851 07:21:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:40.851 07:21:56 -- common/autotest_common.sh@877 -- # return 0 00:04:40.851 07:21:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.851 07:21:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.851 07:21:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.851 07:21:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.851 07:21:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.109 { 00:04:41.109 "nbd_device": "/dev/nbd0", 00:04:41.109 "bdev_name": "Malloc0" 00:04:41.109 }, 00:04:41.109 { 00:04:41.109 "nbd_device": "/dev/nbd1", 
00:04:41.109 "bdev_name": "Malloc1" 00:04:41.109 } 00:04:41.109 ]' 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.109 { 00:04:41.109 "nbd_device": "/dev/nbd0", 00:04:41.109 "bdev_name": "Malloc0" 00:04:41.109 }, 00:04:41.109 { 00:04:41.109 "nbd_device": "/dev/nbd1", 00:04:41.109 "bdev_name": "Malloc1" 00:04:41.109 } 00:04:41.109 ]' 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.109 /dev/nbd1' 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.109 /dev/nbd1' 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.109 256+0 records in 00:04:41.109 256+0 records out 00:04:41.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481879 s, 218 MB/s 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.109 07:21:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.368 256+0 records in 00:04:41.368 256+0 records out 00:04:41.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239628 s, 43.8 MB/s 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.368 256+0 records in 00:04:41.368 256+0 records out 00:04:41.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228354 s, 45.9 MB/s 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@51 -- # local i 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.368 07:21:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@41 -- # break 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.626 07:21:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@41 -- # break 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.884 07:21:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@65 -- # true 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.142 07:21:58 -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.142 07:21:58 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.400 07:21:58 -- event/event.sh@35 -- # 
sleep 3 00:04:42.659 [2024-07-14 07:21:58.663404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.659 [2024-07-14 07:21:58.777585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.659 [2024-07-14 07:21:58.777586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.917 [2024-07-14 07:21:58.839511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.917 [2024-07-14 07:21:58.839582] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.442 07:22:01 -- event/event.sh@23 -- # for i in {0..2} 00:04:45.442 07:22:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.442 spdk_app_start Round 1 00:04:45.442 07:22:01 -- event/event.sh@25 -- # waitforlisten 3973882 /var/tmp/spdk-nbd.sock 00:04:45.442 07:22:01 -- common/autotest_common.sh@819 -- # '[' -z 3973882 ']' 00:04:45.442 07:22:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.442 07:22:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:45.442 07:22:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.442 07:22:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:45.442 07:22:01 -- common/autotest_common.sh@10 -- # set +x 00:04:45.699 07:22:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:45.699 07:22:01 -- common/autotest_common.sh@852 -- # return 0 00:04:45.699 07:22:01 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.957 Malloc0 00:04:45.957 07:22:01 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.213 Malloc1 00:04:46.213 07:22:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@12 -- # local i 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.213 07:22:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.470 /dev/nbd0 00:04:46.470 07:22:02 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.470 07:22:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.470 07:22:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:46.470 07:22:02 -- common/autotest_common.sh@857 -- # local i 00:04:46.470 07:22:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:46.470 07:22:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:46.470 07:22:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:46.470 07:22:02 -- common/autotest_common.sh@861 -- # break 00:04:46.470 07:22:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:46.470 07:22:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:46.470 07:22:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.470 1+0 records in 00:04:46.470 1+0 records out 00:04:46.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160022 s, 25.6 MB/s 00:04:46.470 07:22:02 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.470 07:22:02 -- common/autotest_common.sh@874 -- # size=4096 00:04:46.470 07:22:02 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.470 07:22:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:46.470 07:22:02 -- common/autotest_common.sh@877 -- # return 0 00:04:46.470 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.470 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.470 07:22:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.728 /dev/nbd1 00:04:46.728 07:22:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.728 07:22:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.728 07:22:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:46.728 07:22:02 -- common/autotest_common.sh@857 -- # local i 00:04:46.728 07:22:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:46.728 07:22:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:46.728 07:22:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:46.728 07:22:02 -- common/autotest_common.sh@861 -- # break 00:04:46.728 07:22:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:46.728 07:22:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:46.728 07:22:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.728 1+0 records in 00:04:46.728 1+0 records out 00:04:46.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197243 s, 20.8 MB/s 00:04:46.728 07:22:02 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.728 07:22:02 -- common/autotest_common.sh@874 -- # size=4096 00:04:46.728 07:22:02 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.728 07:22:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:46.728 07:22:02 -- common/autotest_common.sh@877 -- # return 0 00:04:46.728 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.728 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.728 07:22:02 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.728 07:22:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.728 07:22:02 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.986 { 00:04:46.986 "nbd_device": "/dev/nbd0", 00:04:46.986 "bdev_name": "Malloc0" 00:04:46.986 }, 00:04:46.986 { 00:04:46.986 "nbd_device": "/dev/nbd1", 00:04:46.986 "bdev_name": "Malloc1" 00:04:46.986 } 00:04:46.986 ]' 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.986 { 00:04:46.986 "nbd_device": "/dev/nbd0", 00:04:46.986 "bdev_name": "Malloc0" 00:04:46.986 }, 00:04:46.986 { 00:04:46.986 "nbd_device": "/dev/nbd1", 00:04:46.986 "bdev_name": "Malloc1" 00:04:46.986 } 00:04:46.986 ]' 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.986 /dev/nbd1' 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.986 /dev/nbd1' 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.986 256+0 records in 00:04:46.986 256+0 records out 00:04:46.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479069 s, 219 MB/s 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.986 07:22:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.986 256+0 records in 00:04:46.986 256+0 records out 00:04:46.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205119 s, 51.1 MB/s 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.986 256+0 records in 00:04:46.986 256+0 records out 00:04:46.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224478 s, 46.7 MB/s 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@51 -- # local i 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.986 07:22:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@41 -- # break 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.248 07:22:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@41 -- # break 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.510 07:22:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@65 -- # true 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.768 07:22:03 -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.768 07:22:03 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.026 07:22:04 -- event/event.sh@35 -- # sleep 3 00:04:48.283 [2024-07-14 07:22:04.403909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.541 [2024-07-14 07:22:04.520304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.541 [2024-07-14 07:22:04.520304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.541 [2024-07-14 07:22:04.577075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.541 [2024-07-14 07:22:04.577142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:51.160 07:22:07 -- event/event.sh@23 -- # for i in {0..2} 00:04:51.160 07:22:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:51.160 spdk_app_start Round 2 00:04:51.160 07:22:07 -- event/event.sh@25 -- # waitforlisten 3973882 /var/tmp/spdk-nbd.sock 00:04:51.160 07:22:07 -- common/autotest_common.sh@819 -- # '[' -z 3973882 ']' 00:04:51.160 07:22:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.160 07:22:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.160 07:22:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
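Every app_repeat round in this log runs the same data-path check: create two malloc bdevs over RPC, export them as /dev/nbd0 and /dev/nbd1, copy 1 MiB of random data through each device with direct I/O, then compare the device contents back against the source file. A condensed sketch of one such round, assuming the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded:

# Sketch of one app_repeat verification round (paths as in this workspace).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=$(mktemp)    # stands in for test/event/nbdrandtest

$RPC bdev_malloc_create 64 4096      # 64 MiB bdev, 4 KiB blocks -> Malloc0
$RPC bdev_malloc_create 64 4096      # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$TMP" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
  dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$TMP" "$dev"         # any byte mismatch fails the round
done

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
rm -f "$TMP"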
00:04:51.160 07:22:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.160 07:22:07 -- common/autotest_common.sh@10 -- # set +x 00:04:51.420 07:22:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.420 07:22:07 -- common/autotest_common.sh@852 -- # return 0 00:04:51.420 07:22:07 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.679 Malloc0 00:04:51.679 07:22:07 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.938 Malloc1 00:04:51.938 07:22:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@12 -- # local i 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.938 07:22:07 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.196 /dev/nbd0 00:04:52.196 07:22:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.196 07:22:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.196 07:22:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:52.196 07:22:08 -- common/autotest_common.sh@857 -- # local i 00:04:52.196 07:22:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:52.196 07:22:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:52.196 07:22:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:52.196 07:22:08 -- common/autotest_common.sh@861 -- # break 00:04:52.196 07:22:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:52.196 07:22:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:52.196 07:22:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.197 1+0 records in 00:04:52.197 1+0 records out 00:04:52.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016867 s, 24.3 MB/s 00:04:52.197 07:22:08 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.197 07:22:08 -- common/autotest_common.sh@874 -- # size=4096 00:04:52.197 07:22:08 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.197 07:22:08 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:04:52.197 07:22:08 -- common/autotest_common.sh@877 -- # return 0 00:04:52.197 07:22:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.197 07:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.197 07:22:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.455 /dev/nbd1 00:04:52.455 07:22:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.455 07:22:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.455 07:22:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:52.455 07:22:08 -- common/autotest_common.sh@857 -- # local i 00:04:52.455 07:22:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:52.455 07:22:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:52.455 07:22:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:52.455 07:22:08 -- common/autotest_common.sh@861 -- # break 00:04:52.455 07:22:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:52.455 07:22:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:52.455 07:22:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.455 1+0 records in 00:04:52.455 1+0 records out 00:04:52.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205343 s, 19.9 MB/s 00:04:52.455 07:22:08 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.455 07:22:08 -- common/autotest_common.sh@874 -- # size=4096 00:04:52.455 07:22:08 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.455 07:22:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:52.455 07:22:08 -- common/autotest_common.sh@877 -- # return 0 00:04:52.455 07:22:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.455 07:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.455 07:22:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.455 07:22:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.455 07:22:08 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.712 07:22:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.713 { 00:04:52.713 "nbd_device": "/dev/nbd0", 00:04:52.713 "bdev_name": "Malloc0" 00:04:52.713 }, 00:04:52.713 { 00:04:52.713 "nbd_device": "/dev/nbd1", 00:04:52.713 "bdev_name": "Malloc1" 00:04:52.713 } 00:04:52.713 ]' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.713 { 00:04:52.713 "nbd_device": "/dev/nbd0", 00:04:52.713 "bdev_name": "Malloc0" 00:04:52.713 }, 00:04:52.713 { 00:04:52.713 "nbd_device": "/dev/nbd1", 00:04:52.713 "bdev_name": "Malloc1" 00:04:52.713 } 00:04:52.713 ]' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.713 /dev/nbd1' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.713 /dev/nbd1' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.713 07:22:08 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.713 256+0 records in 00:04:52.713 256+0 records out 00:04:52.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501301 s, 209 MB/s 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.713 256+0 records in 00:04:52.713 256+0 records out 00:04:52.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202829 s, 51.7 MB/s 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.713 256+0 records in 00:04:52.713 256+0 records out 00:04:52.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241233 s, 43.5 MB/s 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@51 -- # local i 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.713 07:22:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.970 07:22:09 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@41 -- # break 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.970 07:22:09 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@41 -- # break 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.228 07:22:09 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@65 -- # true 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.485 07:22:09 -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.485 07:22:09 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.743 07:22:09 -- event/event.sh@35 -- # sleep 3 00:04:54.309 [2024-07-14 07:22:10.182127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.309 [2024-07-14 07:22:10.294323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.309 [2024-07-14 07:22:10.294328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.309 [2024-07-14 07:22:10.355988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.309 [2024-07-14 07:22:10.356068] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
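Note the waitfornbd / waitfornbd_exit loops bracketing every attach and detach above: rather than sleeping a fixed time, they poll /proc/partitions for the device name, giving up after 20 attempts. A simplified rendering of that idiom (the in-tree helpers in autotest_common.sh also issue a direct-I/O dd read to confirm the device is actually usable, and the retry interval used here is an assumption):

# Wait until an nbd device shows up in /proc/partitions, or fail after 20 tries.
waitfornbd_sketch() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && return 0
    sleep 0.1
  done
  return 1
}

# The inverse, used after nbd_stop_disk: wait for the entry to disappear.
waitfornbd_exit_sketch() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions || return 0
    sleep 0.1
  done
  return 1
}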
00:04:56.833 07:22:12 -- event/event.sh@38 -- # waitforlisten 3973882 /var/tmp/spdk-nbd.sock 00:04:56.833 07:22:12 -- common/autotest_common.sh@819 -- # '[' -z 3973882 ']' 00:04:56.833 07:22:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.833 07:22:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:56.833 07:22:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.833 07:22:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:56.833 07:22:12 -- common/autotest_common.sh@10 -- # set +x 00:04:57.092 07:22:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:57.092 07:22:13 -- common/autotest_common.sh@852 -- # return 0 00:04:57.092 07:22:13 -- event/event.sh@39 -- # killprocess 3973882 00:04:57.092 07:22:13 -- common/autotest_common.sh@926 -- # '[' -z 3973882 ']' 00:04:57.092 07:22:13 -- common/autotest_common.sh@930 -- # kill -0 3973882 00:04:57.092 07:22:13 -- common/autotest_common.sh@931 -- # uname 00:04:57.092 07:22:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:57.092 07:22:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3973882 00:04:57.092 07:22:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:57.092 07:22:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:57.092 07:22:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3973882' 00:04:57.092 killing process with pid 3973882 00:04:57.092 07:22:13 -- common/autotest_common.sh@945 -- # kill 3973882 00:04:57.092 07:22:13 -- common/autotest_common.sh@950 -- # wait 3973882 00:04:57.349 spdk_app_start is called in Round 0. 00:04:57.349 Shutdown signal received, stop current app iteration 00:04:57.349 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:57.349 spdk_app_start is called in Round 1. 00:04:57.349 Shutdown signal received, stop current app iteration 00:04:57.349 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:57.349 spdk_app_start is called in Round 2. 00:04:57.349 Shutdown signal received, stop current app iteration 00:04:57.349 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:57.349 spdk_app_start is called in Round 3. 
00:04:57.349 Shutdown signal received, stop current app iteration 00:04:57.349 07:22:13 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:57.349 07:22:13 -- event/event.sh@42 -- # return 0 00:04:57.349 00:04:57.349 real 0m18.524s 00:04:57.349 user 0m40.052s 00:04:57.349 sys 0m3.164s 00:04:57.349 07:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.349 07:22:13 -- common/autotest_common.sh@10 -- # set +x 00:04:57.349 ************************************ 00:04:57.349 END TEST app_repeat 00:04:57.349 ************************************ 00:04:57.349 07:22:13 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:57.349 07:22:13 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:57.349 07:22:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.349 07:22:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.349 07:22:13 -- common/autotest_common.sh@10 -- # set +x 00:04:57.349 ************************************ 00:04:57.349 START TEST cpu_locks 00:04:57.349 ************************************ 00:04:57.349 07:22:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:57.606 * Looking for test storage... 00:04:57.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:57.606 07:22:13 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:57.606 07:22:13 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:57.606 07:22:13 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:57.606 07:22:13 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:57.606 07:22:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.606 07:22:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.606 07:22:13 -- common/autotest_common.sh@10 -- # set +x 00:04:57.606 ************************************ 00:04:57.606 START TEST default_locks 00:04:57.606 ************************************ 00:04:57.607 07:22:13 -- common/autotest_common.sh@1104 -- # default_locks 00:04:57.607 07:22:13 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3976427 00:04:57.607 07:22:13 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.607 07:22:13 -- event/cpu_locks.sh@47 -- # waitforlisten 3976427 00:04:57.607 07:22:13 -- common/autotest_common.sh@819 -- # '[' -z 3976427 ']' 00:04:57.607 07:22:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.607 07:22:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.607 07:22:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.607 07:22:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.607 07:22:13 -- common/autotest_common.sh@10 -- # set +x 00:04:57.607 [2024-07-14 07:22:13.584484] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:57.607 [2024-07-14 07:22:13.584578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3976427 ] 00:04:57.607 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.607 [2024-07-14 07:22:13.640501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.607 [2024-07-14 07:22:13.744638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.607 [2024-07-14 07:22:13.744812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.537 07:22:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.537 07:22:14 -- common/autotest_common.sh@852 -- # return 0 00:04:58.537 07:22:14 -- event/cpu_locks.sh@49 -- # locks_exist 3976427 00:04:58.537 07:22:14 -- event/cpu_locks.sh@22 -- # lslocks -p 3976427 00:04:58.537 07:22:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.795 lslocks: write error 00:04:58.795 07:22:14 -- event/cpu_locks.sh@50 -- # killprocess 3976427 00:04:58.795 07:22:14 -- common/autotest_common.sh@926 -- # '[' -z 3976427 ']' 00:04:58.795 07:22:14 -- common/autotest_common.sh@930 -- # kill -0 3976427 00:04:58.795 07:22:14 -- common/autotest_common.sh@931 -- # uname 00:04:58.795 07:22:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.795 07:22:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3976427 00:04:58.795 07:22:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:58.795 07:22:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:58.795 07:22:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3976427' 00:04:58.795 killing process with pid 3976427 00:04:58.795 07:22:14 -- common/autotest_common.sh@945 -- # kill 3976427 00:04:58.795 07:22:14 -- common/autotest_common.sh@950 -- # wait 3976427 00:04:59.361 07:22:15 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3976427 00:04:59.361 07:22:15 -- common/autotest_common.sh@640 -- # local es=0 00:04:59.361 07:22:15 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3976427 00:04:59.361 07:22:15 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:59.361 07:22:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:59.361 07:22:15 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:59.361 07:22:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:59.361 07:22:15 -- common/autotest_common.sh@643 -- # waitforlisten 3976427 00:04:59.361 07:22:15 -- common/autotest_common.sh@819 -- # '[' -z 3976427 ']' 00:04:59.361 07:22:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.361 07:22:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:59.361 07:22:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
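The locks_exist check that just ran is the heart of default_locks: a target started with -m 0x1 should hold a POSIX lock on /var/tmp/spdk_cpu_lock_000, attributable to its pid. A sketch of the check as the trace shows it:

    # Succeeds when the given pid holds any spdk_cpu_lock_* file lock.
    locks_exist_sketch() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

The recurring "lslocks: write error" is harmless: grep -q exits on the first match and closes the pipe, so lslocks takes an EPIPE on its remaining output.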
00:04:59.361 07:22:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.361 07:22:15 -- common/autotest_common.sh@10 -- # set +x 00:04:59.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3976427) - No such process 00:04:59.361 ERROR: process (pid: 3976427) is no longer running 00:04:59.361 07:22:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.361 07:22:15 -- common/autotest_common.sh@852 -- # return 1 00:04:59.361 07:22:15 -- common/autotest_common.sh@643 -- # es=1 00:04:59.361 07:22:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:59.361 07:22:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:59.361 07:22:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:59.361 07:22:15 -- event/cpu_locks.sh@54 -- # no_locks 00:04:59.361 07:22:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.361 07:22:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.361 07:22:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.361 00:04:59.361 real 0m1.738s 00:04:59.361 user 0m1.861s 00:04:59.361 sys 0m0.547s 00:04:59.361 07:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.361 07:22:15 -- common/autotest_common.sh@10 -- # set +x 00:04:59.361 ************************************ 00:04:59.361 END TEST default_locks 00:04:59.361 ************************************ 00:04:59.361 07:22:15 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:59.361 07:22:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:59.361 07:22:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.361 07:22:15 -- common/autotest_common.sh@10 -- # set +x 00:04:59.361 ************************************ 00:04:59.361 START TEST default_locks_via_rpc 00:04:59.361 ************************************ 00:04:59.361 07:22:15 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:59.361 07:22:15 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3976713 00:04:59.361 07:22:15 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.361 07:22:15 -- event/cpu_locks.sh@63 -- # waitforlisten 3976713 00:04:59.361 07:22:15 -- common/autotest_common.sh@819 -- # '[' -z 3976713 ']' 00:04:59.361 07:22:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.361 07:22:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:59.361 07:22:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.361 07:22:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.361 07:22:15 -- common/autotest_common.sh@10 -- # set +x 00:04:59.361 [2024-07-14 07:22:15.348026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
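The NOT wrapper used just above asserts an expected failure: after the kill, waitforlisten on the dead pid must return non-zero, and the es bookkeeping in the trace ((( es > 128 )), (( !es == 0 ))) turns that into a pass. A reduced sketch of the same shape:

    # Run a command that is supposed to fail; invert its exit status.
    NOT_sketch() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && return 1   # death by signal is a real failure, not an expected one
        ((es != 0))                # a plain non-zero status is the expected outcome
    }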
00:04:59.361 [2024-07-14 07:22:15.348107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3976713 ] 00:04:59.361 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.361 [2024-07-14 07:22:15.409297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.361 [2024-07-14 07:22:15.522414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.361 [2024-07-14 07:22:15.522592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.293 07:22:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.293 07:22:16 -- common/autotest_common.sh@852 -- # return 0 00:05:00.293 07:22:16 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:00.293 07:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:00.293 07:22:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.293 07:22:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:00.293 07:22:16 -- event/cpu_locks.sh@67 -- # no_locks 00:05:00.293 07:22:16 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.293 07:22:16 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.293 07:22:16 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.293 07:22:16 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.293 07:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:00.293 07:22:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.293 07:22:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:00.293 07:22:16 -- event/cpu_locks.sh@71 -- # locks_exist 3976713 00:05:00.293 07:22:16 -- event/cpu_locks.sh@22 -- # lslocks -p 3976713 00:05:00.293 07:22:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.551 07:22:16 -- event/cpu_locks.sh@73 -- # killprocess 3976713 00:05:00.551 07:22:16 -- common/autotest_common.sh@926 -- # '[' -z 3976713 ']' 00:05:00.551 07:22:16 -- common/autotest_common.sh@930 -- # kill -0 3976713 00:05:00.551 07:22:16 -- common/autotest_common.sh@931 -- # uname 00:05:00.551 07:22:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:00.551 07:22:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3976713 00:05:00.551 07:22:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:00.551 07:22:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:00.551 07:22:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3976713' 00:05:00.551 killing process with pid 3976713 00:05:00.551 07:22:16 -- common/autotest_common.sh@945 -- # kill 3976713 00:05:00.551 07:22:16 -- common/autotest_common.sh@950 -- # wait 3976713 00:05:01.118 00:05:01.118 real 0m1.733s 00:05:01.118 user 0m1.824s 00:05:01.118 sys 0m0.575s 00:05:01.118 07:22:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.118 07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:01.118 ************************************ 00:05:01.118 END TEST default_locks_via_rpc 00:05:01.118 ************************************ 00:05:01.118 07:22:17 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:01.118 07:22:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.118 07:22:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.118 07:22:17 -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.118 ************************************ 00:05:01.118 START TEST non_locking_app_on_locked_coremask 00:05:01.118 ************************************ 00:05:01.118 07:22:17 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:01.118 07:22:17 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3976896 00:05:01.118 07:22:17 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.118 07:22:17 -- event/cpu_locks.sh@81 -- # waitforlisten 3976896 /var/tmp/spdk.sock 00:05:01.118 07:22:17 -- common/autotest_common.sh@819 -- # '[' -z 3976896 ']' 00:05:01.118 07:22:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.118 07:22:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:01.118 07:22:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.118 07:22:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:01.118 07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:01.118 [2024-07-14 07:22:17.106473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:01.118 [2024-07-14 07:22:17.106557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3976896 ] 00:05:01.118 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.118 [2024-07-14 07:22:17.166909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.118 [2024-07-14 07:22:17.279936] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.118 [2024-07-14 07:22:17.280105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.053 07:22:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:02.053 07:22:18 -- common/autotest_common.sh@852 -- # return 0 00:05:02.053 07:22:18 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3977035 00:05:02.053 07:22:18 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:02.053 07:22:18 -- event/cpu_locks.sh@85 -- # waitforlisten 3977035 /var/tmp/spdk2.sock 00:05:02.053 07:22:18 -- common/autotest_common.sh@819 -- # '[' -z 3977035 ']' 00:05:02.053 07:22:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.054 07:22:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:02.054 07:22:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.054 07:22:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:02.054 07:22:18 -- common/autotest_common.sh@10 -- # set +x 00:05:02.054 [2024-07-14 07:22:18.063476] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
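default_locks_via_rpc, which finished just above, flips the same per-core locks at runtime instead of at startup. Both RPC names appear verbatim in the trace; a sketch of the sequence, assuming SPDK's scripts/rpc.py and the socket path used here:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC framework_disable_cpumask_locks   # no_locks: the lock-file glob comes back empty
    $RPC framework_enable_cpumask_locks    # locks_exist: lslocks sees spdk_cpu_lock_000 again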
00:05:02.054 [2024-07-14 07:22:18.063563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977035 ] 00:05:02.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.054 [2024-07-14 07:22:18.154826] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:02.054 [2024-07-14 07:22:18.154858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.312 [2024-07-14 07:22:18.386826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.312 [2024-07-14 07:22:18.387031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.879 07:22:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:02.879 07:22:18 -- common/autotest_common.sh@852 -- # return 0 00:05:02.879 07:22:18 -- event/cpu_locks.sh@87 -- # locks_exist 3976896 00:05:02.879 07:22:18 -- event/cpu_locks.sh@22 -- # lslocks -p 3976896 00:05:02.879 07:22:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.444 lslocks: write error 00:05:03.444 07:22:19 -- event/cpu_locks.sh@89 -- # killprocess 3976896 00:05:03.444 07:22:19 -- common/autotest_common.sh@926 -- # '[' -z 3976896 ']' 00:05:03.444 07:22:19 -- common/autotest_common.sh@930 -- # kill -0 3976896 00:05:03.444 07:22:19 -- common/autotest_common.sh@931 -- # uname 00:05:03.444 07:22:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:03.444 07:22:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3976896 00:05:03.444 07:22:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:03.444 07:22:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:03.444 07:22:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3976896' 00:05:03.444 killing process with pid 3976896 00:05:03.444 07:22:19 -- common/autotest_common.sh@945 -- # kill 3976896 00:05:03.444 07:22:19 -- common/autotest_common.sh@950 -- # wait 3976896 00:05:04.402 07:22:20 -- event/cpu_locks.sh@90 -- # killprocess 3977035 00:05:04.402 07:22:20 -- common/autotest_common.sh@926 -- # '[' -z 3977035 ']' 00:05:04.402 07:22:20 -- common/autotest_common.sh@930 -- # kill -0 3977035 00:05:04.402 07:22:20 -- common/autotest_common.sh@931 -- # uname 00:05:04.402 07:22:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:04.402 07:22:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3977035 00:05:04.402 07:22:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:04.402 07:22:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:04.402 07:22:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3977035' 00:05:04.402 killing process with pid 3977035 00:05:04.402 07:22:20 -- common/autotest_common.sh@945 -- # kill 3977035 00:05:04.402 07:22:20 -- common/autotest_common.sh@950 -- # wait 3977035 00:05:04.672 00:05:04.672 real 0m3.776s 00:05:04.672 user 0m4.045s 00:05:04.672 sys 0m1.048s 00:05:04.672 07:22:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.672 07:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:04.672 ************************************ 00:05:04.672 END TEST non_locking_app_on_locked_coremask 00:05:04.672 ************************************ 00:05:04.931 07:22:20 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:04.931 07:22:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.931 07:22:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.931 07:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:04.931 ************************************ 00:05:04.931 START TEST locking_app_on_unlocked_coremask 00:05:04.931 ************************************ 00:05:04.931 07:22:20 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:04.931 07:22:20 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3977357 00:05:04.931 07:22:20 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:04.931 07:22:20 -- event/cpu_locks.sh@99 -- # waitforlisten 3977357 /var/tmp/spdk.sock 00:05:04.931 07:22:20 -- common/autotest_common.sh@819 -- # '[' -z 3977357 ']' 00:05:04.931 07:22:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.931 07:22:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:04.931 07:22:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.931 07:22:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:04.931 07:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:04.931 [2024-07-14 07:22:20.914978] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:04.931 [2024-07-14 07:22:20.915060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977357 ] 00:05:04.931 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.931 [2024-07-14 07:22:20.972181] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:04.931 [2024-07-14 07:22:20.972219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.931 [2024-07-14 07:22:21.079831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:04.931 [2024-07-14 07:22:21.080056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.864 07:22:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.864 07:22:21 -- common/autotest_common.sh@852 -- # return 0 00:05:05.864 07:22:21 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3977491 00:05:05.864 07:22:21 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:05.864 07:22:21 -- event/cpu_locks.sh@103 -- # waitforlisten 3977491 /var/tmp/spdk2.sock 00:05:05.864 07:22:21 -- common/autotest_common.sh@819 -- # '[' -z 3977491 ']' 00:05:05.864 07:22:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.864 07:22:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.864 07:22:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
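This test and non_locking_app_on_locked_coremask just above are two sides of the same startup flag: two targets may share core 0 only when one of them opts out of the lock. A sketch with the binary and socket paths from the trace:

    build/bin/spdk_tgt -m 0x1 &                        # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                       # same core, no claim: both run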
00:05:05.864 07:22:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.864 07:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:05.864 [2024-07-14 07:22:21.892327] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:05.864 [2024-07-14 07:22:21.892415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977491 ] 00:05:05.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.864 [2024-07-14 07:22:21.984084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.122 [2024-07-14 07:22:22.222109] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.122 [2024-07-14 07:22:22.222297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.688 07:22:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.688 07:22:22 -- common/autotest_common.sh@852 -- # return 0 00:05:06.688 07:22:22 -- event/cpu_locks.sh@105 -- # locks_exist 3977491 00:05:06.688 07:22:22 -- event/cpu_locks.sh@22 -- # lslocks -p 3977491 00:05:06.688 07:22:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.253 lslocks: write error 00:05:07.253 07:22:23 -- event/cpu_locks.sh@107 -- # killprocess 3977357 00:05:07.253 07:22:23 -- common/autotest_common.sh@926 -- # '[' -z 3977357 ']' 00:05:07.253 07:22:23 -- common/autotest_common.sh@930 -- # kill -0 3977357 00:05:07.253 07:22:23 -- common/autotest_common.sh@931 -- # uname 00:05:07.253 07:22:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:07.253 07:22:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3977357 00:05:07.253 07:22:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:07.253 07:22:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:07.253 07:22:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3977357' 00:05:07.253 killing process with pid 3977357 00:05:07.253 07:22:23 -- common/autotest_common.sh@945 -- # kill 3977357 00:05:07.253 07:22:23 -- common/autotest_common.sh@950 -- # wait 3977357 00:05:08.188 07:22:24 -- event/cpu_locks.sh@108 -- # killprocess 3977491 00:05:08.188 07:22:24 -- common/autotest_common.sh@926 -- # '[' -z 3977491 ']' 00:05:08.188 07:22:24 -- common/autotest_common.sh@930 -- # kill -0 3977491 00:05:08.188 07:22:24 -- common/autotest_common.sh@931 -- # uname 00:05:08.188 07:22:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:08.188 07:22:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3977491 00:05:08.188 07:22:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:08.188 07:22:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:08.188 07:22:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3977491' 00:05:08.188 killing process with pid 3977491 00:05:08.188 07:22:24 -- common/autotest_common.sh@945 -- # kill 3977491 00:05:08.188 07:22:24 -- common/autotest_common.sh@950 -- # wait 3977491 00:05:08.756 00:05:08.756 real 0m3.913s 00:05:08.756 user 0m4.245s 00:05:08.756 sys 0m1.064s 00:05:08.756 07:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.756 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:08.756 ************************************ 00:05:08.756 END TEST locking_app_on_unlocked_coremask 
00:05:08.756 ************************************ 00:05:08.756 07:22:24 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:08.756 07:22:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.756 07:22:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.756 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:08.756 ************************************ 00:05:08.756 START TEST locking_app_on_locked_coremask 00:05:08.756 ************************************ 00:05:08.756 07:22:24 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:08.756 07:22:24 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3977932 00:05:08.756 07:22:24 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.756 07:22:24 -- event/cpu_locks.sh@116 -- # waitforlisten 3977932 /var/tmp/spdk.sock 00:05:08.756 07:22:24 -- common/autotest_common.sh@819 -- # '[' -z 3977932 ']' 00:05:08.756 07:22:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.756 07:22:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.756 07:22:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.756 07:22:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.756 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:08.756 [2024-07-14 07:22:24.853241] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:08.756 [2024-07-14 07:22:24.853330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977932 ] 00:05:08.756 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.756 [2024-07-14 07:22:24.914520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.015 [2024-07-14 07:22:25.027232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.015 [2024-07-14 07:22:25.027405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.948 07:22:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.948 07:22:25 -- common/autotest_common.sh@852 -- # return 0 00:05:09.948 07:22:25 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3978062 00:05:09.948 07:22:25 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:09.948 07:22:25 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3978062 /var/tmp/spdk2.sock 00:05:09.948 07:22:25 -- common/autotest_common.sh@640 -- # local es=0 00:05:09.948 07:22:25 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3978062 /var/tmp/spdk2.sock 00:05:09.948 07:22:25 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:09.948 07:22:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:09.948 07:22:25 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:09.948 07:22:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:09.948 07:22:25 -- common/autotest_common.sh@643 -- # waitforlisten 3978062 /var/tmp/spdk2.sock 00:05:09.948 07:22:25 -- common/autotest_common.sh@819 -- 
# '[' -z 3978062 ']' 00:05:09.948 07:22:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.948 07:22:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.948 07:22:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.948 07:22:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.948 07:22:25 -- common/autotest_common.sh@10 -- # set +x 00:05:09.948 [2024-07-14 07:22:25.813313] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:09.948 [2024-07-14 07:22:25.813388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978062 ] 00:05:09.948 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.948 [2024-07-14 07:22:25.904317] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3977932 has claimed it. 00:05:09.948 [2024-07-14 07:22:25.904393] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3978062) - No such process 00:05:10.513 ERROR: process (pid: 3978062) is no longer running 00:05:10.513 07:22:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.513 07:22:26 -- common/autotest_common.sh@852 -- # return 1 00:05:10.513 07:22:26 -- common/autotest_common.sh@643 -- # es=1 00:05:10.513 07:22:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:10.513 07:22:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:10.513 07:22:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:10.513 07:22:26 -- event/cpu_locks.sh@122 -- # locks_exist 3977932 00:05:10.513 07:22:26 -- event/cpu_locks.sh@22 -- # lslocks -p 3977932 00:05:10.513 07:22:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.080 lslocks: write error 00:05:11.080 07:22:26 -- event/cpu_locks.sh@124 -- # killprocess 3977932 00:05:11.080 07:22:26 -- common/autotest_common.sh@926 -- # '[' -z 3977932 ']' 00:05:11.080 07:22:26 -- common/autotest_common.sh@930 -- # kill -0 3977932 00:05:11.080 07:22:26 -- common/autotest_common.sh@931 -- # uname 00:05:11.080 07:22:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:11.080 07:22:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3977932 00:05:11.080 07:22:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:11.080 07:22:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:11.080 07:22:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3977932' 00:05:11.080 killing process with pid 3977932 00:05:11.080 07:22:27 -- common/autotest_common.sh@945 -- # kill 3977932 00:05:11.080 07:22:27 -- common/autotest_common.sh@950 -- # wait 3977932 00:05:11.339 00:05:11.339 real 0m2.672s 00:05:11.339 user 0m2.990s 00:05:11.339 sys 0m0.698s 00:05:11.339 07:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.339 07:22:27 -- common/autotest_common.sh@10 -- # set +x 00:05:11.339 ************************************ 00:05:11.339 END TEST locking_app_on_locked_coremask 00:05:11.339 ************************************ 00:05:11.339 
07:22:27 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:11.339 07:22:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.339 07:22:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.339 07:22:27 -- common/autotest_common.sh@10 -- # set +x 00:05:11.339 ************************************ 00:05:11.339 START TEST locking_overlapped_coremask 00:05:11.339 ************************************ 00:05:11.339 07:22:27 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:11.339 07:22:27 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3978238 00:05:11.339 07:22:27 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:11.339 07:22:27 -- event/cpu_locks.sh@133 -- # waitforlisten 3978238 /var/tmp/spdk.sock 00:05:11.339 07:22:27 -- common/autotest_common.sh@819 -- # '[' -z 3978238 ']' 00:05:11.339 07:22:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.339 07:22:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:11.339 07:22:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.339 07:22:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.339 07:22:27 -- common/autotest_common.sh@10 -- # set +x 00:05:11.598 [2024-07-14 07:22:27.554504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:11.598 [2024-07-14 07:22:27.554594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978238 ] 00:05:11.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.598 [2024-07-14 07:22:27.616349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.598 [2024-07-14 07:22:27.733820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.598 [2024-07-14 07:22:27.734043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.598 [2024-07-14 07:22:27.734096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.598 [2024-07-14 07:22:27.734099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.530 07:22:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.530 07:22:28 -- common/autotest_common.sh@852 -- # return 0 00:05:12.530 07:22:28 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3978380 00:05:12.530 07:22:28 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:12.530 07:22:28 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3978380 /var/tmp/spdk2.sock 00:05:12.530 07:22:28 -- common/autotest_common.sh@640 -- # local es=0 00:05:12.530 07:22:28 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3978380 /var/tmp/spdk2.sock 00:05:12.530 07:22:28 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:12.530 07:22:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:12.530 07:22:28 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:12.530 07:22:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:12.530 07:22:28 
-- common/autotest_common.sh@643 -- # waitforlisten 3978380 /var/tmp/spdk2.sock 00:05:12.530 07:22:28 -- common/autotest_common.sh@819 -- # '[' -z 3978380 ']' 00:05:12.530 07:22:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.530 07:22:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.530 07:22:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.530 07:22:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.530 07:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:12.530 [2024-07-14 07:22:28.526681] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:12.530 [2024-07-14 07:22:28.526757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978380 ] 00:05:12.530 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.530 [2024-07-14 07:22:28.613418] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3978238 has claimed it. 00:05:12.530 [2024-07-14 07:22:28.613485] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3978380) - No such process 00:05:13.096 ERROR: process (pid: 3978380) is no longer running 00:05:13.096 07:22:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:13.096 07:22:29 -- common/autotest_common.sh@852 -- # return 1 00:05:13.096 07:22:29 -- common/autotest_common.sh@643 -- # es=1 00:05:13.096 07:22:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:13.096 07:22:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:13.096 07:22:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:13.096 07:22:29 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:13.096 07:22:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.096 07:22:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.096 07:22:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.096 07:22:29 -- event/cpu_locks.sh@141 -- # killprocess 3978238 00:05:13.096 07:22:29 -- common/autotest_common.sh@926 -- # '[' -z 3978238 ']' 00:05:13.096 07:22:29 -- common/autotest_common.sh@930 -- # kill -0 3978238 00:05:13.096 07:22:29 -- common/autotest_common.sh@931 -- # uname 00:05:13.096 07:22:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:13.096 07:22:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3978238 00:05:13.096 07:22:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:13.096 07:22:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:13.096 07:22:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3978238' 00:05:13.096 killing process with pid 3978238 00:05:13.096 07:22:29 -- common/autotest_common.sh@945 -- # kill 3978238 00:05:13.096 07:22:29 
-- common/autotest_common.sh@950 -- # wait 3978238 00:05:13.662 00:05:13.662 real 0m2.177s 00:05:13.662 user 0m6.043s 00:05:13.662 sys 0m0.509s 00:05:13.662 07:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.662 07:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:13.662 ************************************ 00:05:13.662 END TEST locking_overlapped_coremask 00:05:13.662 ************************************ 00:05:13.662 07:22:29 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:13.662 07:22:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.662 07:22:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.662 07:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:13.662 ************************************ 00:05:13.662 START TEST locking_overlapped_coremask_via_rpc 00:05:13.662 ************************************ 00:05:13.662 07:22:29 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:13.662 07:22:29 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3978552 00:05:13.662 07:22:29 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:13.662 07:22:29 -- event/cpu_locks.sh@149 -- # waitforlisten 3978552 /var/tmp/spdk.sock 00:05:13.662 07:22:29 -- common/autotest_common.sh@819 -- # '[' -z 3978552 ']' 00:05:13.662 07:22:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.662 07:22:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.662 07:22:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.662 07:22:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.662 07:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:13.662 [2024-07-14 07:22:29.760133] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:13.662 [2024-07-14 07:22:29.760230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978552 ] 00:05:13.662 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.662 [2024-07-14 07:22:29.816968] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
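The overlapped-coremask failure above is pure mask arithmetic: -m 0x7 pins cores 0-2 and -m 0x1c pins cores 2-4, so exactly one bit is contested:

    printf 'contested mask: 0x%x\n' $((0x07 & 0x1c))   # -> 0x4, i.e. core 2

The first target therefore holds /var/tmp/spdk_cpu_lock_000 through _002 (what check_remaining_locks verifies), and the second dies with "Cannot create lock on core 2, probably process 3978238 has claimed it".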
00:05:13.662 [2024-07-14 07:22:29.817005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.920 [2024-07-14 07:22:29.926885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.920 [2024-07-14 07:22:29.927105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.920 [2024-07-14 07:22:29.927165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.920 [2024-07-14 07:22:29.927168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.853 07:22:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.853 07:22:30 -- common/autotest_common.sh@852 -- # return 0 00:05:14.853 07:22:30 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3978694 00:05:14.853 07:22:30 -- event/cpu_locks.sh@153 -- # waitforlisten 3978694 /var/tmp/spdk2.sock 00:05:14.853 07:22:30 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:14.853 07:22:30 -- common/autotest_common.sh@819 -- # '[' -z 3978694 ']' 00:05:14.853 07:22:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.853 07:22:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:14.853 07:22:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.853 07:22:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:14.853 07:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:14.853 [2024-07-14 07:22:30.736062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:14.853 [2024-07-14 07:22:30.736137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3978694 ] 00:05:14.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.853 [2024-07-14 07:22:30.824492] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.853 [2024-07-14 07:22:30.824524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.110 [2024-07-14 07:22:31.042021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.110 [2024-07-14 07:22:31.042217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.110 [2024-07-14 07:22:31.042277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:15.110 [2024-07-14 07:22:31.042280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.676 07:22:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:15.676 07:22:31 -- common/autotest_common.sh@852 -- # return 0 00:05:15.676 07:22:31 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.676 07:22:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.676 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.676 07:22:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:15.676 07:22:31 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.676 07:22:31 -- common/autotest_common.sh@640 -- # local es=0 00:05:15.676 07:22:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.676 07:22:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:15.676 07:22:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:15.676 07:22:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:15.676 07:22:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:15.676 07:22:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.676 07:22:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:15.676 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.676 [2024-07-14 07:22:31.671974] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3978552 has claimed it. 00:05:15.676 request: 00:05:15.676 { 00:05:15.676 "method": "framework_enable_cpumask_locks", 00:05:15.676 "req_id": 1 00:05:15.676 } 00:05:15.676 Got JSON-RPC error response 00:05:15.676 response: 00:05:15.676 { 00:05:15.676 "code": -32603, 00:05:15.676 "message": "Failed to claim CPU core: 2" 00:05:15.676 } 00:05:15.676 07:22:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:15.676 07:22:31 -- common/autotest_common.sh@643 -- # es=1 00:05:15.676 07:22:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:15.676 07:22:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:15.676 07:22:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:15.676 07:22:31 -- event/cpu_locks.sh@158 -- # waitforlisten 3978552 /var/tmp/spdk.sock 00:05:15.676 07:22:31 -- common/autotest_common.sh@819 -- # '[' -z 3978552 ']' 00:05:15.676 07:22:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.676 07:22:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.676 07:22:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
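The via-RPC variant just traced starts both targets with --disable-cpumask-locks and only then races them to claim the locks: the first framework_enable_cpumask_locks succeeds, while the second collides on core 2 and gets the JSON-RPC error quoted above (code -32603, "Failed to claim CPU core: 2"). As a sketch, assuming scripts/rpc.py:

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # ok
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 taken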
00:05:15.676 07:22:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.676 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.934 07:22:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:15.934 07:22:31 -- common/autotest_common.sh@852 -- # return 0 00:05:15.934 07:22:31 -- event/cpu_locks.sh@159 -- # waitforlisten 3978694 /var/tmp/spdk2.sock 00:05:15.934 07:22:31 -- common/autotest_common.sh@819 -- # '[' -z 3978694 ']' 00:05:15.934 07:22:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.934 07:22:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.934 07:22:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.934 07:22:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.934 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:05:16.192 07:22:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.192 07:22:32 -- common/autotest_common.sh@852 -- # return 0 00:05:16.192 07:22:32 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:16.192 07:22:32 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:16.193 07:22:32 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:16.193 07:22:32 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:16.193 00:05:16.193 real 0m2.441s 00:05:16.193 user 0m1.174s 00:05:16.193 sys 0m0.203s 00:05:16.193 07:22:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.193 07:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:16.193 ************************************ 00:05:16.193 END TEST locking_overlapped_coremask_via_rpc 00:05:16.193 ************************************ 00:05:16.193 07:22:32 -- event/cpu_locks.sh@174 -- # cleanup 00:05:16.193 07:22:32 -- event/cpu_locks.sh@15 -- # [[ -z 3978552 ]] 00:05:16.193 07:22:32 -- event/cpu_locks.sh@15 -- # killprocess 3978552 00:05:16.193 07:22:32 -- common/autotest_common.sh@926 -- # '[' -z 3978552 ']' 00:05:16.193 07:22:32 -- common/autotest_common.sh@930 -- # kill -0 3978552 00:05:16.193 07:22:32 -- common/autotest_common.sh@931 -- # uname 00:05:16.193 07:22:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:16.193 07:22:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3978552 00:05:16.193 07:22:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:16.193 07:22:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:16.193 07:22:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3978552' 00:05:16.193 killing process with pid 3978552 00:05:16.193 07:22:32 -- common/autotest_common.sh@945 -- # kill 3978552 00:05:16.193 07:22:32 -- common/autotest_common.sh@950 -- # wait 3978552 00:05:16.759 07:22:32 -- event/cpu_locks.sh@16 -- # [[ -z 3978694 ]] 00:05:16.759 07:22:32 -- event/cpu_locks.sh@16 -- # killprocess 3978694 00:05:16.759 07:22:32 -- common/autotest_common.sh@926 -- # '[' -z 3978694 ']' 00:05:16.759 07:22:32 -- common/autotest_common.sh@930 -- # kill -0 3978694 00:05:16.759 07:22:32 -- common/autotest_common.sh@931 -- # uname 
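The cleanup pass underway here was registered early with "trap cleanup EXIT SIGTERM SIGINT", so it also fires on normal exit, after the pids were already killed; hence the "No such process" lines that follow. A speculative sketch of the shape, assuming the helper simply tolerates dead pids and sweeps the lock files:

    cleanup_sketch() {
        local pid
        for pid in "$@"; do
            kill "$pid" 2>/dev/null || echo "Process with pid $pid is not found"
        done
        rm -f /var/tmp/spdk_cpu_lock_*   # matches the final rm -f in the trace
    }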
00:05:16.759 07:22:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:16.759 07:22:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3978694 00:05:16.759 07:22:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:16.759 07:22:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:16.759 07:22:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3978694' 00:05:16.759 killing process with pid 3978694 00:05:16.759 07:22:32 -- common/autotest_common.sh@945 -- # kill 3978694 00:05:16.759 07:22:32 -- common/autotest_common.sh@950 -- # wait 3978694 00:05:17.018 07:22:33 -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.018 07:22:33 -- event/cpu_locks.sh@1 -- # cleanup 00:05:17.018 07:22:33 -- event/cpu_locks.sh@15 -- # [[ -z 3978552 ]] 00:05:17.018 07:22:33 -- event/cpu_locks.sh@15 -- # killprocess 3978552 00:05:17.018 07:22:33 -- common/autotest_common.sh@926 -- # '[' -z 3978552 ']' 00:05:17.018 07:22:33 -- common/autotest_common.sh@930 -- # kill -0 3978552 00:05:17.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3978552) - No such process 00:05:17.018 07:22:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3978552 is not found' 00:05:17.018 Process with pid 3978552 is not found 00:05:17.018 07:22:33 -- event/cpu_locks.sh@16 -- # [[ -z 3978694 ]] 00:05:17.018 07:22:33 -- event/cpu_locks.sh@16 -- # killprocess 3978694 00:05:17.018 07:22:33 -- common/autotest_common.sh@926 -- # '[' -z 3978694 ']' 00:05:17.018 07:22:33 -- common/autotest_common.sh@930 -- # kill -0 3978694 00:05:17.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3978694) - No such process 00:05:17.018 07:22:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3978694 is not found' 00:05:17.018 Process with pid 3978694 is not found 00:05:17.018 07:22:33 -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.018 00:05:17.018 real 0m19.646s 00:05:17.018 user 0m34.385s 00:05:17.018 sys 0m5.419s 00:05:17.018 07:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.018 07:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.018 ************************************ 00:05:17.018 END TEST cpu_locks 00:05:17.018 ************************************ 00:05:17.018 00:05:17.018 real 0m45.259s 00:05:17.018 user 1m24.939s 00:05:17.018 sys 0m9.294s 00:05:17.018 07:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.018 07:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.018 ************************************ 00:05:17.018 END TEST event 00:05:17.018 ************************************ 00:05:17.018 07:22:33 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:17.018 07:22:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.018 07:22:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.018 07:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.018 ************************************ 00:05:17.018 START TEST thread 00:05:17.018 ************************************ 00:05:17.018 07:22:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:17.275 * Looking for test storage... 
00:05:17.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:17.275 07:22:33 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.275 07:22:33 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:17.275 07:22:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.275 07:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.275 ************************************ 00:05:17.275 START TEST thread_poller_perf 00:05:17.275 ************************************ 00:05:17.275 07:22:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.275 [2024-07-14 07:22:33.240570] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:17.275 [2024-07-14 07:22:33.240651] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979060 ] 00:05:17.275 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.275 [2024-07-14 07:22:33.303316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.275 [2024-07-14 07:22:33.409466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.275 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:18.653 ====================================== 00:05:18.653 busy:2714348181 (cyc) 00:05:18.653 total_run_count: 281000 00:05:18.653 tsc_hz: 2700000000 (cyc) 00:05:18.653 ====================================== 00:05:18.653 poller_cost: 9659 (cyc), 3577 (nsec) 00:05:18.653 00:05:18.653 real 0m1.316s 00:05:18.653 user 0m1.231s 00:05:18.653 sys 0m0.079s 00:05:18.653 07:22:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.653 07:22:34 -- common/autotest_common.sh@10 -- # set +x 00:05:18.653 ************************************ 00:05:18.653 END TEST thread_poller_perf 00:05:18.653 ************************************ 00:05:18.653 07:22:34 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:18.653 07:22:34 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:18.653 07:22:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.653 07:22:34 -- common/autotest_common.sh@10 -- # set +x 00:05:18.653 ************************************ 00:05:18.653 START TEST thread_poller_perf 00:05:18.653 ************************************ 00:05:18.653 07:22:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:18.653 [2024-07-14 07:22:34.583489] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
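The summary block above is simple division: poller_cost is total busy TSC cycles over poller invocations, converted to nanoseconds through the TSC rate. Reproducing the first run's numbers in shell arithmetic:

    busy=2714348181 runs=281000 tsc_hz=2700000000
    echo $((busy / runs))                          # -> 9659 cycles per poll
    echo $((busy / runs * 1000000000 / tsc_hz))    # -> 3577 ns per poll

The 0-period run that follows lands near 705 cycles (261 ns) per poll; the cheaper figure is consistent with busy pollers (-l 0) skipping the timer bookkeeping that 1-microsecond timed pollers (-l 1) pay on every pass.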
00:05:18.653 [2024-07-14 07:22:34.583585] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979333 ] 00:05:18.653 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.653 [2024-07-14 07:22:34.647088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.653 [2024-07-14 07:22:34.762236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.653 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:20.081 ====================================== 00:05:20.081 busy:2703528529 (cyc) 00:05:20.081 total_run_count: 3834000 00:05:20.081 tsc_hz: 2700000000 (cyc) 00:05:20.081 ====================================== 00:05:20.081 poller_cost: 705 (cyc), 261 (nsec) 00:05:20.081 00:05:20.081 real 0m1.318s 00:05:20.081 user 0m1.233s 00:05:20.081 sys 0m0.078s 00:05:20.081 07:22:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.081 07:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.081 ************************************ 00:05:20.081 END TEST thread_poller_perf 00:05:20.081 ************************************ 00:05:20.081 07:22:35 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:20.081 00:05:20.081 real 0m2.733s 00:05:20.081 user 0m2.499s 00:05:20.081 sys 0m0.234s 00:05:20.081 07:22:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.081 07:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.081 ************************************ 00:05:20.081 END TEST thread 00:05:20.081 ************************************ 00:05:20.081 07:22:35 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:20.081 07:22:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.081 07:22:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.081 07:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.081 ************************************ 00:05:20.081 START TEST accel 00:05:20.081 ************************************ 00:05:20.081 07:22:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:20.081 * Looking for test storage... 00:05:20.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:20.081 07:22:35 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:20.081 07:22:35 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:20.081 07:22:35 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.081 07:22:35 -- accel/accel.sh@59 -- # spdk_tgt_pid=3979539 00:05:20.081 07:22:35 -- accel/accel.sh@60 -- # waitforlisten 3979539 00:05:20.081 07:22:35 -- common/autotest_common.sh@819 -- # '[' -z 3979539 ']' 00:05:20.081 07:22:35 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:20.081 07:22:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.081 07:22:35 -- accel/accel.sh@58 -- # build_accel_config 00:05:20.081 07:22:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:20.081 07:22:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.081 07:22:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:20.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.081 07:22:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.081 07:22:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:20.081 07:22:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.081 07:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.081 07:22:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.081 07:22:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.081 07:22:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.081 07:22:35 -- accel/accel.sh@42 -- # jq -r . 00:05:20.081 [2024-07-14 07:22:36.030454] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:20.081 [2024-07-14 07:22:36.030539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979539 ] 00:05:20.081 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.081 [2024-07-14 07:22:36.088647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.081 [2024-07-14 07:22:36.193525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.081 [2024-07-14 07:22:36.193713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.023 07:22:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.023 07:22:36 -- common/autotest_common.sh@852 -- # return 0 00:05:21.023 07:22:36 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:21.023 07:22:36 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:21.023 07:22:36 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:21.023 07:22:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:21.023 07:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.023 07:22:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:21.023 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.023 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.023 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.023 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.023 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.023 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.023 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.023 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.023 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.023 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.023 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.023 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.023 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.023 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.023 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.023 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.023 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.024 07:22:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.024 07:22:36 -- accel/accel.sh@64 -- # IFS== 00:05:21.024 07:22:36 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.024 07:22:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.024 07:22:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # IFS== 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.024 07:22:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.024 07:22:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # IFS== 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.024 07:22:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.024 07:22:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # IFS== 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.024 
07:22:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.024 07:22:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # IFS== 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.024 07:22:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.024 07:22:37 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # IFS== 00:05:21.024 07:22:37 -- accel/accel.sh@64 -- # read -r opc module 00:05:21.024 07:22:37 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:21.024 07:22:37 -- accel/accel.sh@67 -- # killprocess 3979539 00:05:21.024 07:22:37 -- common/autotest_common.sh@926 -- # '[' -z 3979539 ']' 00:05:21.024 07:22:37 -- common/autotest_common.sh@930 -- # kill -0 3979539 00:05:21.024 07:22:37 -- common/autotest_common.sh@931 -- # uname 00:05:21.024 07:22:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:21.024 07:22:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3979539 00:05:21.024 07:22:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:21.024 07:22:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:21.024 07:22:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3979539' 00:05:21.024 killing process with pid 3979539 00:05:21.024 07:22:37 -- common/autotest_common.sh@945 -- # kill 3979539 00:05:21.024 07:22:37 -- common/autotest_common.sh@950 -- # wait 3979539 00:05:21.591 07:22:37 -- accel/accel.sh@68 -- # trap - ERR 00:05:21.591 07:22:37 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:21.591 07:22:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:21.591 07:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.591 07:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:21.591 07:22:37 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:21.591 07:22:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:21.591 07:22:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.591 07:22:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.591 07:22:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.591 07:22:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.591 07:22:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.591 07:22:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.591 07:22:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.591 07:22:37 -- accel/accel.sh@42 -- # jq -r . 
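The long expected_opcs block above is the harness asking the freshly started target which module services each accel opcode and recording the answers as key=value pairs; every opcode resolves to the software module in this run. A standalone sketch of the same query, assuming a running target, the in-tree scripts/rpc.py client, and the default RPC socket (the jq filter is the one the harness uses above):

    scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
        | while IFS== read -r opc module; do
              echo "opcode ${opc} -> ${module}"    # e.g. "opcode copy -> software"
          done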
00:05:21.591 07:22:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.591 07:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:21.591 07:22:37 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:21.591 07:22:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:21.591 07:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.591 07:22:37 -- common/autotest_common.sh@10 -- # set +x 00:05:21.591 ************************************ 00:05:21.591 START TEST accel_missing_filename 00:05:21.591 ************************************ 00:05:21.591 07:22:37 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:21.591 07:22:37 -- common/autotest_common.sh@640 -- # local es=0 00:05:21.591 07:22:37 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:21.591 07:22:37 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:21.591 07:22:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:21.591 07:22:37 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:21.591 07:22:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:21.591 07:22:37 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:21.591 07:22:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:21.591 07:22:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.591 07:22:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.591 07:22:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.591 07:22:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.591 07:22:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.591 07:22:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.591 07:22:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.591 07:22:37 -- accel/accel.sh@42 -- # jq -r . 00:05:21.591 [2024-07-14 07:22:37.550250] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:21.591 [2024-07-14 07:22:37.550328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979715 ] 00:05:21.591 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.591 [2024-07-14 07:22:37.613796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.591 [2024-07-14 07:22:37.730534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.849 [2024-07-14 07:22:37.792080] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.849 [2024-07-14 07:22:37.878882] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:21.849 A filename is required. 
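The abort above is accel_perf rejecting a compress run that names no input: per its help text, -l supplies the uncompressed input file for compress/decompress workloads. A corrected invocation would look like the one the very next test case uses (relative paths assumed from the spdk checkout):

    # compress needs -l <uncompressed input file>; bib is the stock test input
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib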
00:05:21.849 07:22:38 -- common/autotest_common.sh@643 -- # es=234 00:05:21.849 07:22:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:21.849 07:22:38 -- common/autotest_common.sh@652 -- # es=106 00:05:21.849 07:22:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:21.849 07:22:38 -- common/autotest_common.sh@660 -- # es=1 00:05:21.849 07:22:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:21.850 00:05:21.850 real 0m0.474s 00:05:21.850 user 0m0.366s 00:05:21.850 sys 0m0.140s 00:05:21.850 07:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.850 07:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:21.850 ************************************ 00:05:21.850 END TEST accel_missing_filename 00:05:21.850 ************************************ 00:05:22.109 07:22:38 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.109 07:22:38 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:22.109 07:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.109 07:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.109 ************************************ 00:05:22.109 START TEST accel_compress_verify 00:05:22.109 ************************************ 00:05:22.109 07:22:38 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.109 07:22:38 -- common/autotest_common.sh@640 -- # local es=0 00:05:22.109 07:22:38 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.109 07:22:38 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:22.109 07:22:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:22.109 07:22:38 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:22.109 07:22:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:22.109 07:22:38 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.109 07:22:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.109 07:22:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.109 07:22:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.109 07:22:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.109 07:22:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.109 07:22:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.109 07:22:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.109 07:22:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.109 07:22:38 -- accel/accel.sh@42 -- # jq -r . 00:05:22.109 [2024-07-14 07:22:38.053978] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
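The es= lines above are the harness's NOT wrapper normalizing the expected failure: an exit status above 128 has 128 subtracted (234 becomes 106), any remaining nonzero status collapses to 1, and the final arithmetic test succeeds precisely because the wrapped command failed. A rough sketch of that logic, mirroring (not quoting) autotest_common.sh:

    not() {
        local es=0
        "$@" || es=$?                           # run the command expected to fail
        (( es > 128 )) && es=$(( es - 128 ))    # fold signal statuses: 234 -> 106
        (( es != 0 )) && es=1                   # collapse any failure to 1
        (( ! es == 0 ))                         # invert: exit 0 iff the command failed
    }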
00:05:22.109 [2024-07-14 07:22:38.054059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979861 ] 00:05:22.109 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.109 [2024-07-14 07:22:38.117346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.109 [2024-07-14 07:22:38.231940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.369 [2024-07-14 07:22:38.291758] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.369 [2024-07-14 07:22:38.365269] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:22.369 00:05:22.369 Compression does not support the verify option, aborting. 00:05:22.369 07:22:38 -- common/autotest_common.sh@643 -- # es=161 00:05:22.369 07:22:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:22.369 07:22:38 -- common/autotest_common.sh@652 -- # es=33 00:05:22.369 07:22:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:22.369 07:22:38 -- common/autotest_common.sh@660 -- # es=1 00:05:22.369 07:22:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:22.369 00:05:22.369 real 0m0.452s 00:05:22.369 user 0m0.340s 00:05:22.369 sys 0m0.145s 00:05:22.369 07:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.369 07:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.369 ************************************ 00:05:22.369 END TEST accel_compress_verify 00:05:22.369 ************************************ 00:05:22.369 07:22:38 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:22.369 07:22:38 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:22.369 07:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.369 07:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.369 ************************************ 00:05:22.369 START TEST accel_wrong_workload 00:05:22.369 ************************************ 00:05:22.369 07:22:38 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:22.369 07:22:38 -- common/autotest_common.sh@640 -- # local es=0 00:05:22.369 07:22:38 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:22.369 07:22:38 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:22.369 07:22:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:22.369 07:22:38 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:22.369 07:22:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:22.369 07:22:38 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:22.369 07:22:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:22.369 07:22:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.369 07:22:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.369 07:22:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.369 07:22:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.369 07:22:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.369 07:22:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.369 07:22:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.369 07:22:38 -- accel/accel.sh@42 -- # jq -r . 
00:05:22.369 Unsupported workload type: foobar [2024-07-14 07:22:38.530011] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:22.369 accel_perf options: 00:05:22.369 [-h help message] 00:05:22.369 [-q queue depth per core] 00:05:22.369 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:05:22.369 [-T number of threads per core] 00:05:22.369 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:22.369 [-t time in seconds] 00:05:22.369 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:22.369 dif_verify, dif_generate, dif_generate_copy] 00:05:22.369 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:05:22.369 [-l for compress/decompress workloads, name of uncompressed input file] 00:05:22.369 [-S for crc32c workload, use this seed value (default 0)] 00:05:22.369 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:05:22.369 [-f for fill workload, use this BYTE value (default 255)] 00:05:22.369 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:22.369 [-y verify result if this switch is on] 00:05:22.369 [-a tasks to allocate per core (default: same value as -q)] 00:05:22.369 Can be used to spread operations across a wider range of memory. 00:05:22.369 07:22:38 -- common/autotest_common.sh@643 -- # es=1 00:05:22.369 07:22:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:22.369 07:22:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:22.369 07:22:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:22.369 00:05:22.369 real 0m0.024s 00:05:22.369 user 0m0.015s 00:05:22.369 sys 0m0.009s 00:05:22.369 07:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.369 07:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.369 ************************************ 00:05:22.369 END TEST accel_wrong_workload 00:05:22.369 ************************************ 00:05:22.628 Error: writing output failed: Broken pipe 00:05:22.628 07:22:38 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:22.628 07:22:38 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:22.628 07:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.628 07:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.628 ************************************ 00:05:22.628 START TEST accel_negative_buffers 00:05:22.628 ************************************ 00:05:22.628 07:22:38 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:22.628 07:22:38 -- common/autotest_common.sh@640 -- # local es=0 00:05:22.628 07:22:38 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:22.628 07:22:38 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:22.628 07:22:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:22.628 07:22:38 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:22.628 07:22:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:22.628 07:22:38 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1
00:05:22.628 07:22:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:22.628 07:22:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.628 07:22:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.628 07:22:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.628 07:22:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.628 07:22:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.628 07:22:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.628 07:22:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.628 07:22:38 -- accel/accel.sh@42 -- # jq -r . 00:05:22.628 -x option must be non-negative. [2024-07-14 07:22:38.579593] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:22.629 accel_perf options: 00:05:22.629 [-h help message] 00:05:22.629 [-q queue depth per core] 00:05:22.629 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:05:22.629 [-T number of threads per core] 00:05:22.629 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:22.629 [-t time in seconds] 00:05:22.629 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:22.629 dif_verify, dif_generate, dif_generate_copy] 00:05:22.629 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:05:22.629 [-l for compress/decompress workloads, name of uncompressed input file] 00:05:22.629 [-S for crc32c workload, use this seed value (default 0)] 00:05:22.629 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:05:22.629 [-f for fill workload, use this BYTE value (default 255)] 00:05:22.629 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:22.629 [-y verify result if this switch is on] 00:05:22.629 [-a tasks to allocate per core (default: same value as -q)] 00:05:22.629 Can be used to spread operations across a wider range of memory.
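Both failures above are plain argument validation: -w must name one of the listed workloads, and -x, the number of xor source buffers, has a documented minimum of 2. A well-formed xor run for contrast (hypothetical direct invocation of the same binary):

    # xor across two source buffers, with result verification enabled
    ./build/examples/accel_perf -t 1 -w xor -x 2 -y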
00:05:22.629 07:22:38 -- common/autotest_common.sh@643 -- # es=1 00:05:22.629 07:22:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:22.629 07:22:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:22.629 07:22:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:22.629 00:05:22.629 real 0m0.024s 00:05:22.629 user 0m0.015s 00:05:22.629 sys 0m0.009s 00:05:22.629 07:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.629 07:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.629 ************************************ 00:05:22.629 END TEST accel_negative_buffers 00:05:22.629 ************************************ 00:05:22.629 Error: writing output failed: Broken pipe 00:05:22.629 07:22:38 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:22.629 07:22:38 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:22.629 07:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.629 07:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:22.629 ************************************ 00:05:22.629 START TEST accel_crc32c 00:05:22.629 ************************************ 00:05:22.629 07:22:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:22.629 07:22:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.629 07:22:38 -- accel/accel.sh@17 -- # local accel_module 00:05:22.629 07:22:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:22.629 07:22:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:22.629 07:22:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.629 07:22:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.629 07:22:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.629 07:22:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.629 07:22:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.629 07:22:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.629 07:22:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.629 07:22:38 -- accel/accel.sh@42 -- # jq -r . 00:05:22.629 [2024-07-14 07:22:38.628022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:22.629 [2024-07-14 07:22:38.628084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979924 ] 00:05:22.629 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.629 [2024-07-14 07:22:38.691561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.887 [2024-07-14 07:22:38.809712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.263 07:22:40 -- accel/accel.sh@18 -- # out=' 00:05:24.263 SPDK Configuration: 00:05:24.263 Core mask: 0x1 00:05:24.263 00:05:24.263 Accel Perf Configuration: 00:05:24.263 Workload Type: crc32c 00:05:24.263 CRC-32C seed: 32 00:05:24.263 Transfer size: 4096 bytes 00:05:24.263 Vector count 1 00:05:24.263 Module: software 00:05:24.263 Queue depth: 32 00:05:24.263 Allocate depth: 32 00:05:24.263 # threads/core: 1 00:05:24.263 Run time: 1 seconds 00:05:24.263 Verify: Yes 00:05:24.263 00:05:24.263 Running for 1 seconds... 
00:05:24.263 00:05:24.263 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:24.263 ------------------------------------------------------------------------------------ 00:05:24.263 0,0 403392/s 1575 MiB/s 0 0 00:05:24.263 ==================================================================================== 00:05:24.263 Total 403392/s 1575 MiB/s 0 0' 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:24.263 07:22:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.263 07:22:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.263 07:22:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.263 07:22:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.263 07:22:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.263 07:22:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.263 07:22:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.263 07:22:40 -- accel/accel.sh@42 -- # jq -r . 00:05:24.263 [2024-07-14 07:22:40.106852] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:24.263 [2024-07-14 07:22:40.106964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980137 ] 00:05:24.263 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.263 [2024-07-14 07:22:40.171977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.263 [2024-07-14 07:22:40.288488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val= 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val= 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val=0x1 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val= 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val= 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val=crc32c 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val=32 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 
07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val= 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val=software 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@23 -- # accel_module=software 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val=32 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val=32 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val=1 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val=Yes 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val= 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:24.263 07:22:40 -- accel/accel.sh@21 -- # val= 00:05:24.263 07:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:24.263 07:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:25.638 07:22:41 -- accel/accel.sh@21 -- # val= 00:05:25.638 07:22:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # IFS=: 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # read -r var val 00:05:25.638 07:22:41 -- accel/accel.sh@21 -- # val= 00:05:25.638 07:22:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # IFS=: 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # read -r var val 00:05:25.638 07:22:41 -- accel/accel.sh@21 -- # val= 00:05:25.638 07:22:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # IFS=: 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # read -r var val 00:05:25.638 07:22:41 -- accel/accel.sh@21 -- # val= 00:05:25.638 07:22:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # IFS=: 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # read -r var val 00:05:25.638 07:22:41 -- accel/accel.sh@21 -- # val= 00:05:25.638 07:22:41 -- accel/accel.sh@22 -- # case "$var" in 
00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # IFS=: 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # read -r var val 00:05:25.638 07:22:41 -- accel/accel.sh@21 -- # val= 00:05:25.638 07:22:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # IFS=: 00:05:25.638 07:22:41 -- accel/accel.sh@20 -- # read -r var val 00:05:25.638 07:22:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:25.638 07:22:41 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:25.638 07:22:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.638 00:05:25.638 real 0m2.956s 00:05:25.638 user 0m2.651s 00:05:25.638 sys 0m0.298s 00:05:25.638 07:22:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.638 07:22:41 -- common/autotest_common.sh@10 -- # set +x 00:05:25.638 ************************************ 00:05:25.638 END TEST accel_crc32c 00:05:25.638 ************************************ 00:05:25.638 07:22:41 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:25.638 07:22:41 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:25.638 07:22:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.638 07:22:41 -- common/autotest_common.sh@10 -- # set +x 00:05:25.638 ************************************ 00:05:25.638 START TEST accel_crc32c_C2 00:05:25.638 ************************************ 00:05:25.638 07:22:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:25.638 07:22:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.638 07:22:41 -- accel/accel.sh@17 -- # local accel_module 00:05:25.638 07:22:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:25.638 07:22:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:25.638 07:22:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.638 07:22:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.638 07:22:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.638 07:22:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.638 07:22:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.638 07:22:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.638 07:22:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.638 07:22:41 -- accel/accel.sh@42 -- # jq -r . 00:05:25.638 [2024-07-14 07:22:41.609634] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
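The crc32c bandwidth reported above is simply the transfer rate times the 4096-byte transfer size (vector count 1). A sketch of the arithmetic:

    echo "$(( 403392 * 4096 / 1048576 )) MiB/s"    # -> 1575, matching the table above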
00:05:25.639 [2024-07-14 07:22:41.609705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980342 ] 00:05:25.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.639 [2024-07-14 07:22:41.669968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.639 [2024-07-14 07:22:41.788274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.013 07:22:43 -- accel/accel.sh@18 -- # out=' 00:05:27.013 SPDK Configuration: 00:05:27.013 Core mask: 0x1 00:05:27.013 00:05:27.013 Accel Perf Configuration: 00:05:27.013 Workload Type: crc32c 00:05:27.013 CRC-32C seed: 0 00:05:27.013 Transfer size: 4096 bytes 00:05:27.013 Vector count 2 00:05:27.013 Module: software 00:05:27.013 Queue depth: 32 00:05:27.013 Allocate depth: 32 00:05:27.013 # threads/core: 1 00:05:27.013 Run time: 1 seconds 00:05:27.013 Verify: Yes 00:05:27.013 00:05:27.013 Running for 1 seconds... 00:05:27.013 00:05:27.013 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:27.013 ------------------------------------------------------------------------------------ 00:05:27.013 0,0 315296/s 2463 MiB/s 0 0 00:05:27.013 ==================================================================================== 00:05:27.013 Total 315296/s 1231 MiB/s 0 0' 00:05:27.013 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.013 07:22:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:27.013 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.013 07:22:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:27.013 07:22:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.013 07:22:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.013 07:22:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.013 07:22:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.013 07:22:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.013 07:22:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.013 07:22:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.013 07:22:43 -- accel/accel.sh@42 -- # jq -r . 00:05:27.013 [2024-07-14 07:22:43.060542] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
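With -C 2 (vector count 2) each crc32c operation above covers two 4 KiB iovecs, so the per-core row reports roughly double the single-vector bandwidth at a lower operation rate; the totals row appears to count only one 4 KiB buffer per operation, which explains the halved figure on the same transfer count. Sketch:

    echo "$(( 315296 * 2 * 4096 / 1048576 )) MiB/s"    # -> 2463, the per-core row
    echo "$(( 315296 * 4096 / 1048576 )) MiB/s"        # -> 1231, the totals row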
00:05:27.013 [2024-07-14 07:22:43.060625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980495 ] 00:05:27.013 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.013 [2024-07-14 07:22:43.125877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.287 [2024-07-14 07:22:43.243137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val= 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val= 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val=0x1 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val= 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val= 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val=crc32c 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val=0 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val= 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val=software 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@23 -- # accel_module=software 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val=32 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val=32 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- 
accel/accel.sh@21 -- # val=1 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val=Yes 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val= 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:27.287 07:22:43 -- accel/accel.sh@21 -- # val= 00:05:27.287 07:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # IFS=: 00:05:27.287 07:22:43 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 07:22:44 -- accel/accel.sh@21 -- # val= 00:05:28.657 07:22:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 07:22:44 -- accel/accel.sh@21 -- # val= 00:05:28.657 07:22:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 07:22:44 -- accel/accel.sh@21 -- # val= 00:05:28.657 07:22:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 07:22:44 -- accel/accel.sh@21 -- # val= 00:05:28.657 07:22:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 07:22:44 -- accel/accel.sh@21 -- # val= 00:05:28.657 07:22:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 07:22:44 -- accel/accel.sh@21 -- # val= 00:05:28.657 07:22:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 07:22:44 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 07:22:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:28.657 07:22:44 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:28.657 07:22:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.657 00:05:28.657 real 0m2.928s 00:05:28.657 user 0m2.618s 00:05:28.657 sys 0m0.303s 00:05:28.657 07:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.657 07:22:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.657 ************************************ 00:05:28.657 END TEST accel_crc32c_C2 00:05:28.657 ************************************ 00:05:28.657 07:22:44 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:28.657 07:22:44 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:28.657 07:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.657 07:22:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.657 ************************************ 00:05:28.657 START TEST accel_copy 
00:05:28.657 ************************************ 00:05:28.657 07:22:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:28.657 07:22:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.657 07:22:44 -- accel/accel.sh@17 -- # local accel_module 00:05:28.657 07:22:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:28.657 07:22:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:28.657 07:22:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.657 07:22:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.657 07:22:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.657 07:22:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.657 07:22:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.657 07:22:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.657 07:22:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.657 07:22:44 -- accel/accel.sh@42 -- # jq -r . 00:05:28.657 [2024-07-14 07:22:44.566711] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:28.657 [2024-07-14 07:22:44.566802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980651 ] 00:05:28.657 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.657 [2024-07-14 07:22:44.630993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.657 [2024-07-14 07:22:44.746844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.030 07:22:46 -- accel/accel.sh@18 -- # out=' 00:05:30.030 SPDK Configuration: 00:05:30.030 Core mask: 0x1 00:05:30.030 00:05:30.030 Accel Perf Configuration: 00:05:30.030 Workload Type: copy 00:05:30.030 Transfer size: 4096 bytes 00:05:30.030 Vector count 1 00:05:30.030 Module: software 00:05:30.030 Queue depth: 32 00:05:30.030 Allocate depth: 32 00:05:30.030 # threads/core: 1 00:05:30.030 Run time: 1 seconds 00:05:30.030 Verify: Yes 00:05:30.030 00:05:30.030 Running for 1 seconds... 00:05:30.030 00:05:30.030 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:30.030 ------------------------------------------------------------------------------------ 00:05:30.030 0,0 278752/s 1088 MiB/s 0 0 00:05:30.030 ==================================================================================== 00:05:30.030 Total 278752/s 1088 MiB/s 0 0' 00:05:30.030 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.030 07:22:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:30.030 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.030 07:22:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:30.030 07:22:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.030 07:22:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.030 07:22:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.030 07:22:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.030 07:22:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.030 07:22:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.030 07:22:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.030 07:22:46 -- accel/accel.sh@42 -- # jq -r . 00:05:30.030 [2024-07-14 07:22:46.041512] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
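Each accel_test case ends with the [[ -n software ]]-style checks seen above: the harness parses the module and workload type back out of the captured accel_perf report and requires that the software module handled the opcode. A condensed sketch of that assertion (the out variable and field positions are assumptions based on the report format above):

    accel_module=$(echo "$out" | awk '/Module:/ {print $2}')         # "software"
    accel_opc=$(echo "$out" | awk '/Workload Type:/ {print $3}')     # e.g. "copy"
    [[ -n $accel_module && -n $accel_opc ]]
    [[ $accel_module == software ]]    # every opcode runs on the software module here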
00:05:30.030 [2024-07-14 07:22:46.041592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980914 ] 00:05:30.030 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.030 [2024-07-14 07:22:46.102989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.288 [2024-07-14 07:22:46.220999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val= 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val= 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val=0x1 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val= 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val= 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val=copy 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val= 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val=software 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@23 -- # accel_module=software 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val=32 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val=32 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val=1 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val=Yes 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val= 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:30.289 07:22:46 -- accel/accel.sh@21 -- # val= 00:05:30.289 07:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # IFS=: 00:05:30.289 07:22:46 -- accel/accel.sh@20 -- # read -r var val 00:05:31.664 07:22:47 -- accel/accel.sh@21 -- # val= 00:05:31.664 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:05:31.664 07:22:47 -- accel/accel.sh@21 -- # val= 00:05:31.664 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:05:31.664 07:22:47 -- accel/accel.sh@21 -- # val= 00:05:31.664 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:05:31.664 07:22:47 -- accel/accel.sh@21 -- # val= 00:05:31.664 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:05:31.664 07:22:47 -- accel/accel.sh@21 -- # val= 00:05:31.664 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:05:31.664 07:22:47 -- accel/accel.sh@21 -- # val= 00:05:31.664 07:22:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # IFS=: 00:05:31.664 07:22:47 -- accel/accel.sh@20 -- # read -r var val 00:05:31.664 07:22:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:31.664 07:22:47 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:31.664 07:22:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.664 00:05:31.664 real 0m2.945s 00:05:31.664 user 0m2.660s 00:05:31.664 sys 0m0.276s 00:05:31.664 07:22:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.664 07:22:47 -- common/autotest_common.sh@10 -- # set +x 00:05:31.664 ************************************ 00:05:31.664 END TEST accel_copy 00:05:31.664 ************************************ 00:05:31.664 07:22:47 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:31.664 07:22:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:31.664 07:22:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.664 07:22:47 -- common/autotest_common.sh@10 -- # set +x 00:05:31.664 ************************************ 00:05:31.664 START TEST accel_fill 00:05:31.664 ************************************ 00:05:31.664 07:22:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:31.664 07:22:47 -- accel/accel.sh@16 -- # local accel_opc 
00:05:31.664 07:22:47 -- accel/accel.sh@17 -- # local accel_module 00:05:31.664 07:22:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:31.664 07:22:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:31.664 07:22:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.664 07:22:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.664 07:22:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.664 07:22:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.664 07:22:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.664 07:22:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.664 07:22:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.664 07:22:47 -- accel/accel.sh@42 -- # jq -r . 00:05:31.664 [2024-07-14 07:22:47.533322] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:31.664 [2024-07-14 07:22:47.533389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981074 ] 00:05:31.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.664 [2024-07-14 07:22:47.594110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.664 [2024-07-14 07:22:47.709635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.037 07:22:48 -- accel/accel.sh@18 -- # out=' 00:05:33.037 SPDK Configuration: 00:05:33.037 Core mask: 0x1 00:05:33.037 00:05:33.037 Accel Perf Configuration: 00:05:33.037 Workload Type: fill 00:05:33.037 Fill pattern: 0x80 00:05:33.037 Transfer size: 4096 bytes 00:05:33.037 Vector count 1 00:05:33.037 Module: software 00:05:33.037 Queue depth: 64 00:05:33.037 Allocate depth: 64 00:05:33.037 # threads/core: 1 00:05:33.037 Run time: 1 seconds 00:05:33.037 Verify: Yes 00:05:33.037 00:05:33.037 Running for 1 seconds... 00:05:33.037 00:05:33.037 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:33.037 ------------------------------------------------------------------------------------ 00:05:33.037 0,0 403776/s 1577 MiB/s 0 0 00:05:33.037 ==================================================================================== 00:05:33.037 Total 403776/s 1577 MiB/s 0 0' 00:05:33.037 07:22:48 -- accel/accel.sh@20 -- # IFS=: 00:05:33.037 07:22:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:33.037 07:22:48 -- accel/accel.sh@20 -- # read -r var val 00:05:33.037 07:22:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:33.037 07:22:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.037 07:22:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.037 07:22:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.037 07:22:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.037 07:22:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.037 07:22:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.037 07:22:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.037 07:22:48 -- accel/accel.sh@42 -- # jq -r . 00:05:33.037 [2024-07-14 07:22:49.009622] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
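The fill case above shows how the flags on the run_test line map into the report: -f 128 appears as fill pattern 0x80, -q 64 as the queue depth, and -a 64 as the allocate depth (per the -a description in the option list). Equivalent direct invocation (hypothetical path):

    # fill each 4 KiB buffer with byte 0x80 (decimal 128) at queue depth 64
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y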
00:05:33.037 [2024-07-14 07:22:49.009702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981223 ] 00:05:33.037 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.037 [2024-07-14 07:22:49.070358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.037 [2024-07-14 07:22:49.190093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val= 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val= 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val=0x1 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val= 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val= 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val=fill 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val=0x80 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val= 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val=software 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@23 -- # accel_module=software 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val=64 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val=64 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- 
accel/accel.sh@21 -- # val=1 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val=Yes 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val= 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:33.296 07:22:49 -- accel/accel.sh@21 -- # val= 00:05:33.296 07:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # IFS=: 00:05:33.296 07:22:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.675 07:22:50 -- accel/accel.sh@21 -- # val= 00:05:34.675 07:22:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # IFS=: 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # read -r var val 00:05:34.675 07:22:50 -- accel/accel.sh@21 -- # val= 00:05:34.675 07:22:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # IFS=: 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # read -r var val 00:05:34.675 07:22:50 -- accel/accel.sh@21 -- # val= 00:05:34.675 07:22:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # IFS=: 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # read -r var val 00:05:34.675 07:22:50 -- accel/accel.sh@21 -- # val= 00:05:34.675 07:22:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # IFS=: 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # read -r var val 00:05:34.675 07:22:50 -- accel/accel.sh@21 -- # val= 00:05:34.675 07:22:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # IFS=: 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # read -r var val 00:05:34.675 07:22:50 -- accel/accel.sh@21 -- # val= 00:05:34.675 07:22:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # IFS=: 00:05:34.675 07:22:50 -- accel/accel.sh@20 -- # read -r var val 00:05:34.675 07:22:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:34.675 07:22:50 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:34.675 07:22:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.675 00:05:34.675 real 0m2.947s 00:05:34.675 user 0m2.646s 00:05:34.675 sys 0m0.292s 00:05:34.675 07:22:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.675 07:22:50 -- common/autotest_common.sh@10 -- # set +x 00:05:34.675 ************************************ 00:05:34.675 END TEST accel_fill 00:05:34.675 ************************************ 00:05:34.675 07:22:50 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:34.675 07:22:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:34.675 07:22:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.675 07:22:50 -- common/autotest_common.sh@10 -- # set +x 00:05:34.675 ************************************ 00:05:34.675 START TEST 
accel_copy_crc32c 00:05:34.675 ************************************ 00:05:34.675 07:22:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:34.675 07:22:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.675 07:22:50 -- accel/accel.sh@17 -- # local accel_module 00:05:34.675 07:22:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:34.675 07:22:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:34.675 07:22:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.675 07:22:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.675 07:22:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.675 07:22:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.675 07:22:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.675 07:22:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.675 07:22:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.675 07:22:50 -- accel/accel.sh@42 -- # jq -r . 00:05:34.675 [2024-07-14 07:22:50.509308] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:34.675 [2024-07-14 07:22:50.509393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981497 ] 00:05:34.675 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.675 [2024-07-14 07:22:50.575131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.675 [2024-07-14 07:22:50.695852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.096 07:22:51 -- accel/accel.sh@18 -- # out=' 00:05:36.096 SPDK Configuration: 00:05:36.096 Core mask: 0x1 00:05:36.096 00:05:36.096 Accel Perf Configuration: 00:05:36.096 Workload Type: copy_crc32c 00:05:36.096 CRC-32C seed: 0 00:05:36.096 Vector size: 4096 bytes 00:05:36.096 Transfer size: 4096 bytes 00:05:36.096 Vector count 1 00:05:36.096 Module: software 00:05:36.096 Queue depth: 32 00:05:36.096 Allocate depth: 32 00:05:36.096 # threads/core: 1 00:05:36.096 Run time: 1 seconds 00:05:36.096 Verify: Yes 00:05:36.096 00:05:36.096 Running for 1 seconds... 00:05:36.096 00:05:36.096 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:36.096 ------------------------------------------------------------------------------------ 00:05:36.096 0,0 216000/s 843 MiB/s 0 0 00:05:36.096 ==================================================================================== 00:05:36.096 Total 216000/s 843 MiB/s 0 0' 00:05:36.096 07:22:51 -- accel/accel.sh@20 -- # IFS=: 00:05:36.096 07:22:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:36.096 07:22:51 -- accel/accel.sh@20 -- # read -r var val 00:05:36.096 07:22:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:36.096 07:22:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.096 07:22:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.096 07:22:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.096 07:22:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.096 07:22:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.096 07:22:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.096 07:22:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.096 07:22:51 -- accel/accel.sh@42 -- # jq -r . 
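The build_accel_config preamble traced just above (accel.sh@32-42: accel_json_cfg=(), three "[[ 0 -gt 0 ]]" guards, "[[ -n '' ]]", "local IFS=,", "jq -r .") assembles the JSON configuration that accel_perf reads through "-c /dev/fd/62". A minimal runnable sketch of what those traced commands appear to do; only the traced commands are certain, and the guard variable names and the RPC method string below are illustrative placeholders, not SPDK's actual ones:

    build_accel_config() {
        accel_json_cfg=()                       # accel.sh@32: per-module JSON fragments
        # accel.sh@33-35 evaluate three "[[ 0 -gt 0 ]]" guards; the variables and the
        # methods behind them are not visible in the log, so one assumed pair stands in:
        if [[ ${ACCEL_HW_MODULES:-0} -gt 0 ]]; then
            accel_json_cfg+=('{"method": "enable_hw_module"}')   # hypothetical method name
        fi
        if [[ -n ${extra_accel_cfg:-} ]]; then  # accel.sh@37 tests "[[ -n '' ]]" (empty here)
            accel_json_cfg+=("$extra_accel_cfg")
        fi
        local IFS=,                             # accel.sh@41: join fragments with commas
        # accel.sh@42: pretty-print the final document; accel_perf reads it as /dev/fd/62
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }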
00:05:36.096 [2024-07-14 07:22:51.992720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:36.096 [2024-07-14 07:22:51.992806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981647 ] 00:05:36.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.096 [2024-07-14 07:22:52.053362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.096 [2024-07-14 07:22:52.171837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.096 07:22:52 -- accel/accel.sh@21 -- # val= 00:05:36.096 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.096 07:22:52 -- accel/accel.sh@21 -- # val= 00:05:36.096 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.096 07:22:52 -- accel/accel.sh@21 -- # val=0x1 00:05:36.096 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.096 07:22:52 -- accel/accel.sh@21 -- # val= 00:05:36.096 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.096 07:22:52 -- accel/accel.sh@21 -- # val= 00:05:36.096 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.096 07:22:52 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:36.096 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.096 07:22:52 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:36.096 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val=0 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val= 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val=software 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@23 -- # accel_module=software 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val=32 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 
00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val=32 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val=1 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val=Yes 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val= 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:36.097 07:22:52 -- accel/accel.sh@21 -- # val= 00:05:36.097 07:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # IFS=: 00:05:36.097 07:22:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.468 07:22:53 -- accel/accel.sh@21 -- # val= 00:05:37.468 07:22:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # IFS=: 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # read -r var val 00:05:37.468 07:22:53 -- accel/accel.sh@21 -- # val= 00:05:37.468 07:22:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # IFS=: 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # read -r var val 00:05:37.468 07:22:53 -- accel/accel.sh@21 -- # val= 00:05:37.468 07:22:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # IFS=: 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # read -r var val 00:05:37.468 07:22:53 -- accel/accel.sh@21 -- # val= 00:05:37.468 07:22:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # IFS=: 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # read -r var val 00:05:37.468 07:22:53 -- accel/accel.sh@21 -- # val= 00:05:37.468 07:22:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # IFS=: 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # read -r var val 00:05:37.468 07:22:53 -- accel/accel.sh@21 -- # val= 00:05:37.468 07:22:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # IFS=: 00:05:37.468 07:22:53 -- accel/accel.sh@20 -- # read -r var val 00:05:37.468 07:22:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:37.468 07:22:53 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:37.468 07:22:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.468 00:05:37.468 real 0m2.955s 00:05:37.468 user 0m2.669s 00:05:37.468 sys 0m0.276s 00:05:37.468 07:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.468 07:22:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.468 ************************************ 00:05:37.468 END TEST accel_copy_crc32c 00:05:37.468 ************************************ 00:05:37.468 
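The long runs of "IFS=:", "read -r var val" and "case \"$var\" in" that dominate the trace above are the harness parsing, field by field, the configuration block accel_perf printed: the xtrace-expanded "val=" assignments (val=copy_crc32c, val=software, val=32, val='1 seconds', val=Yes) mirror the "Workload Type:", "Module:", "Queue depth:", "Run time:" and "Verify:" lines of that block, accel.sh@23-24 record accel_module/accel_opc, and the accel.sh@28 checks ("[[ -n software ]]", "[[ -n copy_crc32c ]]") assert that both were actually seen. A minimal sketch of a loop with that shape, assuming the captured output sits in "$out"; the case match patterns below are hypothetical, since xtrace never shows them:

    # Parse "key: value" pairs out of accel_perf's configuration dump.
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val# } ;;    # e.g. copy_crc32c
            *"Module"*) accel_module=${val# } ;;        # e.g. software
        esac
    done <<< "$out"
    # Assertions of the shape the trace ends with (accel.sh@28):
    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]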
07:22:53 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:37.468 07:22:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:37.468 07:22:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.468 07:22:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.468 ************************************ 00:05:37.468 START TEST accel_copy_crc32c_C2 00:05:37.468 ************************************ 00:05:37.468 07:22:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:37.468 07:22:53 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.468 07:22:53 -- accel/accel.sh@17 -- # local accel_module 00:05:37.468 07:22:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:37.468 07:22:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:37.468 07:22:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.468 07:22:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.468 07:22:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.468 07:22:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.468 07:22:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.468 07:22:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.468 07:22:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.468 07:22:53 -- accel/accel.sh@42 -- # jq -r . 00:05:37.468 [2024-07-14 07:22:53.489893] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:37.468 [2024-07-14 07:22:53.489973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981808 ] 00:05:37.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.468 [2024-07-14 07:22:53.555273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.727 [2024-07-14 07:22:53.677495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.100 07:22:54 -- accel/accel.sh@18 -- # out=' 00:05:39.100 SPDK Configuration: 00:05:39.100 Core mask: 0x1 00:05:39.100 00:05:39.100 Accel Perf Configuration: 00:05:39.100 Workload Type: copy_crc32c 00:05:39.100 CRC-32C seed: 0 00:05:39.100 Vector size: 4096 bytes 00:05:39.100 Transfer size: 8192 bytes 00:05:39.100 Vector count 2 00:05:39.100 Module: software 00:05:39.100 Queue depth: 32 00:05:39.100 Allocate depth: 32 00:05:39.100 # threads/core: 1 00:05:39.100 Run time: 1 seconds 00:05:39.100 Verify: Yes 00:05:39.100 00:05:39.100 Running for 1 seconds... 
00:05:39.100 00:05:39.100 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:39.100 ------------------------------------------------------------------------------------ 00:05:39.100 0,0 154784/s 1209 MiB/s 0 0 00:05:39.100 ==================================================================================== 00:05:39.100 Total 154784/s 604 MiB/s 0 0' 00:05:39.100 07:22:54 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:39.101 07:22:54 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:39.101 07:22:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.101 07:22:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.101 07:22:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.101 07:22:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.101 07:22:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.101 07:22:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.101 07:22:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.101 07:22:54 -- accel/accel.sh@42 -- # jq -r . 00:05:39.101 [2024-07-14 07:22:54.976029] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:39.101 [2024-07-14 07:22:54.976108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982071 ] 00:05:39.101 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.101 [2024-07-14 07:22:55.037162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.101 [2024-07-14 07:22:55.157650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val= 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val= 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val=0x1 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val= 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val= 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val=0 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 
00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val= 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val=software 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@23 -- # accel_module=software 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val=32 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val=32 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val=1 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val=Yes 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val= 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:39.101 07:22:55 -- accel/accel.sh@21 -- # val= 00:05:39.101 07:22:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # IFS=: 00:05:39.101 07:22:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.476 07:22:56 -- accel/accel.sh@21 -- # val= 00:05:40.476 07:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # IFS=: 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # read -r var val 00:05:40.476 07:22:56 -- accel/accel.sh@21 -- # val= 00:05:40.476 07:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # IFS=: 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # read -r var val 00:05:40.476 07:22:56 -- accel/accel.sh@21 -- # val= 00:05:40.476 07:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # IFS=: 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # read -r var val 00:05:40.476 07:22:56 -- accel/accel.sh@21 -- # val= 00:05:40.476 07:22:56 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # IFS=: 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # read -r var val 00:05:40.476 07:22:56 -- accel/accel.sh@21 -- # val= 00:05:40.476 07:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # IFS=: 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # read -r var val 00:05:40.476 07:22:56 -- accel/accel.sh@21 -- # val= 00:05:40.476 07:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # IFS=: 00:05:40.476 07:22:56 -- accel/accel.sh@20 -- # read -r var val 00:05:40.476 07:22:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:40.476 07:22:56 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:40.476 07:22:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.476 00:05:40.476 real 0m2.970s 00:05:40.476 user 0m2.671s 00:05:40.476 sys 0m0.290s 00:05:40.476 07:22:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.476 07:22:56 -- common/autotest_common.sh@10 -- # set +x 00:05:40.476 ************************************ 00:05:40.476 END TEST accel_copy_crc32c_C2 00:05:40.476 ************************************ 00:05:40.476 07:22:56 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:40.476 07:22:56 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:40.476 07:22:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.476 07:22:56 -- common/autotest_common.sh@10 -- # set +x 00:05:40.476 ************************************ 00:05:40.476 START TEST accel_dualcast 00:05:40.476 ************************************ 00:05:40.476 07:22:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:40.476 07:22:56 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.476 07:22:56 -- accel/accel.sh@17 -- # local accel_module 00:05:40.476 07:22:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:40.476 07:22:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:40.476 07:22:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.476 07:22:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.476 07:22:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.476 07:22:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.476 07:22:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.476 07:22:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.476 07:22:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.476 07:22:56 -- accel/accel.sh@42 -- # jq -r . 00:05:40.476 [2024-07-14 07:22:56.488478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:40.476 [2024-07-14 07:22:56.488558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982230 ] 00:05:40.476 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.476 [2024-07-14 07:22:56.552490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.734 [2024-07-14 07:22:56.675115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.109 07:22:57 -- accel/accel.sh@18 -- # out=' 00:05:42.109 SPDK Configuration: 00:05:42.109 Core mask: 0x1 00:05:42.109 00:05:42.109 Accel Perf Configuration: 00:05:42.109 Workload Type: dualcast 00:05:42.109 Transfer size: 4096 bytes 00:05:42.109 Vector count 1 00:05:42.109 Module: software 00:05:42.109 Queue depth: 32 00:05:42.109 Allocate depth: 32 00:05:42.109 # threads/core: 1 00:05:42.109 Run time: 1 seconds 00:05:42.109 Verify: Yes 00:05:42.109 00:05:42.109 Running for 1 seconds... 00:05:42.109 00:05:42.109 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:42.109 ------------------------------------------------------------------------------------ 00:05:42.109 0,0 298080/s 1164 MiB/s 0 0 00:05:42.109 ==================================================================================== 00:05:42.109 Total 298080/s 1164 MiB/s 0 0' 00:05:42.109 07:22:57 -- accel/accel.sh@20 -- # IFS=: 00:05:42.109 07:22:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:42.109 07:22:57 -- accel/accel.sh@20 -- # read -r var val 00:05:42.109 07:22:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:42.109 07:22:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.109 07:22:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.109 07:22:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.109 07:22:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.109 07:22:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.109 07:22:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.109 07:22:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.109 07:22:57 -- accel/accel.sh@42 -- # jq -r . 00:05:42.109 [2024-07-14 07:22:57.972409] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:42.109 [2024-07-14 07:22:57.972493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982373 ] 00:05:42.109 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.109 [2024-07-14 07:22:58.036924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.109 [2024-07-14 07:22:58.154930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.109 07:22:58 -- accel/accel.sh@21 -- # val= 00:05:42.109 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.109 07:22:58 -- accel/accel.sh@21 -- # val= 00:05:42.109 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.109 07:22:58 -- accel/accel.sh@21 -- # val=0x1 00:05:42.109 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.109 07:22:58 -- accel/accel.sh@21 -- # val= 00:05:42.109 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.109 07:22:58 -- accel/accel.sh@21 -- # val= 00:05:42.109 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.109 07:22:58 -- accel/accel.sh@21 -- # val=dualcast 00:05:42.109 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.109 07:22:58 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.109 07:22:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.109 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.109 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.109 07:22:58 -- accel/accel.sh@21 -- # val= 00:05:42.109 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.110 07:22:58 -- accel/accel.sh@21 -- # val=software 00:05:42.110 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.110 07:22:58 -- accel/accel.sh@21 -- # val=32 00:05:42.110 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.110 07:22:58 -- accel/accel.sh@21 -- # val=32 00:05:42.110 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.110 07:22:58 -- accel/accel.sh@21 -- # val=1 00:05:42.110 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.110 07:22:58 
-- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.110 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.110 07:22:58 -- accel/accel.sh@21 -- # val=Yes 00:05:42.110 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.110 07:22:58 -- accel/accel.sh@21 -- # val= 00:05:42.110 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:42.110 07:22:58 -- accel/accel.sh@21 -- # val= 00:05:42.110 07:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # IFS=: 00:05:42.110 07:22:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.485 07:22:59 -- accel/accel.sh@21 -- # val= 00:05:43.485 07:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # IFS=: 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # read -r var val 00:05:43.485 07:22:59 -- accel/accel.sh@21 -- # val= 00:05:43.485 07:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # IFS=: 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # read -r var val 00:05:43.485 07:22:59 -- accel/accel.sh@21 -- # val= 00:05:43.485 07:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # IFS=: 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # read -r var val 00:05:43.485 07:22:59 -- accel/accel.sh@21 -- # val= 00:05:43.485 07:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # IFS=: 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # read -r var val 00:05:43.485 07:22:59 -- accel/accel.sh@21 -- # val= 00:05:43.485 07:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # IFS=: 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # read -r var val 00:05:43.485 07:22:59 -- accel/accel.sh@21 -- # val= 00:05:43.485 07:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # IFS=: 00:05:43.485 07:22:59 -- accel/accel.sh@20 -- # read -r var val 00:05:43.485 07:22:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:43.485 07:22:59 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:43.485 07:22:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.485 00:05:43.485 real 0m2.970s 00:05:43.485 user 0m2.657s 00:05:43.485 sys 0m0.304s 00:05:43.485 07:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.485 07:22:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.485 ************************************ 00:05:43.485 END TEST accel_dualcast 00:05:43.485 ************************************ 00:05:43.485 07:22:59 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:43.485 07:22:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:43.485 07:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.485 07:22:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.485 ************************************ 00:05:43.485 START TEST accel_compare 00:05:43.485 ************************************ 00:05:43.485 07:22:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:43.485 07:22:59 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.485 07:22:59 
-- accel/accel.sh@17 -- # local accel_module 00:05:43.485 07:22:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:43.485 07:22:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:43.485 07:22:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.485 07:22:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.485 07:22:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.485 07:22:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.485 07:22:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.485 07:22:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.485 07:22:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.485 07:22:59 -- accel/accel.sh@42 -- # jq -r . 00:05:43.485 [2024-07-14 07:22:59.483850] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:43.485 [2024-07-14 07:22:59.483940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982618 ] 00:05:43.485 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.485 [2024-07-14 07:22:59.548018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.743 [2024-07-14 07:22:59.671116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.119 07:23:00 -- accel/accel.sh@18 -- # out=' 00:05:45.119 SPDK Configuration: 00:05:45.119 Core mask: 0x1 00:05:45.119 00:05:45.119 Accel Perf Configuration: 00:05:45.119 Workload Type: compare 00:05:45.119 Transfer size: 4096 bytes 00:05:45.119 Vector count 1 00:05:45.119 Module: software 00:05:45.119 Queue depth: 32 00:05:45.119 Allocate depth: 32 00:05:45.119 # threads/core: 1 00:05:45.119 Run time: 1 seconds 00:05:45.119 Verify: Yes 00:05:45.119 00:05:45.119 Running for 1 seconds... 00:05:45.119 00:05:45.119 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:45.119 ------------------------------------------------------------------------------------ 00:05:45.119 0,0 396000/s 1546 MiB/s 0 0 00:05:45.119 ==================================================================================== 00:05:45.119 Total 396000/s 1546 MiB/s 0 0' 00:05:45.119 07:23:00 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:45.119 07:23:00 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:45.119 07:23:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.119 07:23:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.119 07:23:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.119 07:23:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.119 07:23:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.119 07:23:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.119 07:23:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.119 07:23:00 -- accel/accel.sh@42 -- # jq -r . 00:05:45.119 [2024-07-14 07:23:00.974383] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:45.119 [2024-07-14 07:23:00.974466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982792 ] 00:05:45.119 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.119 [2024-07-14 07:23:01.036811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.119 [2024-07-14 07:23:01.155784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val= 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val= 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val=0x1 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val= 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val= 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val=compare 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val= 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val=software 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.119 07:23:01 -- accel/accel.sh@21 -- # val=32 00:05:45.119 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.119 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.120 07:23:01 -- accel/accel.sh@21 -- # val=32 00:05:45.120 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.120 07:23:01 -- accel/accel.sh@21 -- # val=1 00:05:45.120 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.120 07:23:01 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:45.120 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.120 07:23:01 -- accel/accel.sh@21 -- # val=Yes 00:05:45.120 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.120 07:23:01 -- accel/accel.sh@21 -- # val= 00:05:45.120 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.120 07:23:01 -- accel/accel.sh@21 -- # val= 00:05:45.120 07:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.120 07:23:01 -- accel/accel.sh@20 -- # read -r var val 00:05:46.495 07:23:02 -- accel/accel.sh@21 -- # val= 00:05:46.495 07:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # IFS=: 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # read -r var val 00:05:46.495 07:23:02 -- accel/accel.sh@21 -- # val= 00:05:46.495 07:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # IFS=: 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # read -r var val 00:05:46.495 07:23:02 -- accel/accel.sh@21 -- # val= 00:05:46.495 07:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # IFS=: 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # read -r var val 00:05:46.495 07:23:02 -- accel/accel.sh@21 -- # val= 00:05:46.495 07:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # IFS=: 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # read -r var val 00:05:46.495 07:23:02 -- accel/accel.sh@21 -- # val= 00:05:46.495 07:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # IFS=: 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # read -r var val 00:05:46.495 07:23:02 -- accel/accel.sh@21 -- # val= 00:05:46.495 07:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # IFS=: 00:05:46.495 07:23:02 -- accel/accel.sh@20 -- # read -r var val 00:05:46.495 07:23:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.495 07:23:02 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:46.495 07:23:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.495 00:05:46.495 real 0m2.958s 00:05:46.495 user 0m2.652s 00:05:46.495 sys 0m0.298s 00:05:46.495 07:23:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.495 07:23:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.495 ************************************ 00:05:46.495 END TEST accel_compare 00:05:46.495 ************************************ 00:05:46.495 07:23:02 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:46.495 07:23:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:46.495 07:23:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.495 07:23:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.495 ************************************ 00:05:46.495 START TEST accel_xor 00:05:46.495 ************************************ 00:05:46.495 07:23:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:46.495 07:23:02 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.495 07:23:02 -- accel/accel.sh@17 
-- # local accel_module 00:05:46.495 07:23:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:46.495 07:23:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:46.495 07:23:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.495 07:23:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.495 07:23:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.495 07:23:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.495 07:23:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.495 07:23:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.495 07:23:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.495 07:23:02 -- accel/accel.sh@42 -- # jq -r . 00:05:46.495 [2024-07-14 07:23:02.468445] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:46.495 [2024-07-14 07:23:02.468526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982960 ] 00:05:46.495 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.495 [2024-07-14 07:23:02.529764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.495 [2024-07-14 07:23:02.650110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.869 07:23:03 -- accel/accel.sh@18 -- # out=' 00:05:47.869 SPDK Configuration: 00:05:47.869 Core mask: 0x1 00:05:47.869 00:05:47.869 Accel Perf Configuration: 00:05:47.869 Workload Type: xor 00:05:47.869 Source buffers: 2 00:05:47.869 Transfer size: 4096 bytes 00:05:47.869 Vector count 1 00:05:47.869 Module: software 00:05:47.869 Queue depth: 32 00:05:47.869 Allocate depth: 32 00:05:47.869 # threads/core: 1 00:05:47.869 Run time: 1 seconds 00:05:47.869 Verify: Yes 00:05:47.869 00:05:47.869 Running for 1 seconds... 00:05:47.869 00:05:47.869 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:47.869 ------------------------------------------------------------------------------------ 00:05:47.869 0,0 192288/s 751 MiB/s 0 0 00:05:47.869 ==================================================================================== 00:05:47.869 Total 192288/s 751 MiB/s 0 0' 00:05:47.869 07:23:03 -- accel/accel.sh@20 -- # IFS=: 00:05:47.869 07:23:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:47.869 07:23:03 -- accel/accel.sh@20 -- # read -r var val 00:05:47.869 07:23:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:47.869 07:23:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.869 07:23:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.869 07:23:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.869 07:23:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.869 07:23:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.869 07:23:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.869 07:23:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.869 07:23:03 -- accel/accel.sh@42 -- # jq -r . 00:05:47.869 [2024-07-14 07:23:03.951160] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:47.869 [2024-07-14 07:23:03.951241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983115 ] 00:05:47.869 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.869 [2024-07-14 07:23:04.012293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.127 [2024-07-14 07:23:04.133015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val= 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val= 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val=0x1 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val= 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val= 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val=xor 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val=2 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val= 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val=software 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val=32 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val=32 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- 
accel/accel.sh@21 -- # val=1 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val=Yes 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val= 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.127 07:23:04 -- accel/accel.sh@21 -- # val= 00:05:48.127 07:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.127 07:23:04 -- accel/accel.sh@20 -- # read -r var val 00:05:49.502 07:23:05 -- accel/accel.sh@21 -- # val= 00:05:49.502 07:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # IFS=: 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # read -r var val 00:05:49.502 07:23:05 -- accel/accel.sh@21 -- # val= 00:05:49.502 07:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # IFS=: 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # read -r var val 00:05:49.502 07:23:05 -- accel/accel.sh@21 -- # val= 00:05:49.502 07:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # IFS=: 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # read -r var val 00:05:49.502 07:23:05 -- accel/accel.sh@21 -- # val= 00:05:49.502 07:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # IFS=: 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # read -r var val 00:05:49.502 07:23:05 -- accel/accel.sh@21 -- # val= 00:05:49.502 07:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # IFS=: 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # read -r var val 00:05:49.502 07:23:05 -- accel/accel.sh@21 -- # val= 00:05:49.502 07:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # IFS=: 00:05:49.502 07:23:05 -- accel/accel.sh@20 -- # read -r var val 00:05:49.502 07:23:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.502 07:23:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:49.502 07:23:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.502 00:05:49.502 real 0m2.970s 00:05:49.502 user 0m2.656s 00:05:49.502 sys 0m0.305s 00:05:49.502 07:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.502 07:23:05 -- common/autotest_common.sh@10 -- # set +x 00:05:49.502 ************************************ 00:05:49.502 END TEST accel_xor 00:05:49.502 ************************************ 00:05:49.502 07:23:05 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:49.502 07:23:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:49.502 07:23:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.502 07:23:05 -- common/autotest_common.sh@10 -- # set +x 00:05:49.502 ************************************ 00:05:49.502 START TEST accel_xor 
00:05:49.502 ************************************ 00:05:49.502 07:23:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:05:49.502 07:23:05 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.502 07:23:05 -- accel/accel.sh@17 -- # local accel_module 00:05:49.502 07:23:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:49.502 07:23:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:49.502 07:23:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.502 07:23:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.502 07:23:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.502 07:23:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.502 07:23:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.502 07:23:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.502 07:23:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.502 07:23:05 -- accel/accel.sh@42 -- # jq -r . 00:05:49.502 [2024-07-14 07:23:05.462704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:49.502 [2024-07-14 07:23:05.462783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983381 ] 00:05:49.502 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.502 [2024-07-14 07:23:05.524538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.502 [2024-07-14 07:23:05.644951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.877 07:23:06 -- accel/accel.sh@18 -- # out=' 00:05:50.877 SPDK Configuration: 00:05:50.877 Core mask: 0x1 00:05:50.877 00:05:50.877 Accel Perf Configuration: 00:05:50.877 Workload Type: xor 00:05:50.877 Source buffers: 3 00:05:50.877 Transfer size: 4096 bytes 00:05:50.877 Vector count 1 00:05:50.877 Module: software 00:05:50.877 Queue depth: 32 00:05:50.877 Allocate depth: 32 00:05:50.877 # threads/core: 1 00:05:50.877 Run time: 1 seconds 00:05:50.877 Verify: Yes 00:05:50.877 00:05:50.877 Running for 1 seconds... 00:05:50.877 00:05:50.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:50.877 ------------------------------------------------------------------------------------ 00:05:50.877 0,0 182464/s 712 MiB/s 0 0 00:05:50.877 ==================================================================================== 00:05:50.877 Total 182464/s 712 MiB/s 0 0' 00:05:50.877 07:23:06 -- accel/accel.sh@20 -- # IFS=: 00:05:50.877 07:23:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:50.877 07:23:06 -- accel/accel.sh@20 -- # read -r var val 00:05:50.877 07:23:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:50.877 07:23:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.877 07:23:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.877 07:23:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.877 07:23:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.877 07:23:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.877 07:23:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.877 07:23:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.877 07:23:06 -- accel/accel.sh@42 -- # jq -r . 00:05:50.877 [2024-07-14 07:23:06.933541] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
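The -x 3 run above differs from the previous test only in reading one extra source stream per output byte (NSRC of 3 in the sketch shown earlier), and the cost is visible but modest: 182464/s × 4096 B = 182464 × 4096 / 2^20 ≈ 712 MiB/s, versus ≈ 751 MiB/s with two sources, roughly a 5% drop on this single-core software path.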
00:05:50.877 [2024-07-14 07:23:06.933622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983524 ] 00:05:50.877 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.877 [2024-07-14 07:23:06.994588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.160 [2024-07-14 07:23:07.116299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val= 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val= 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val=0x1 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val= 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val= 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val=xor 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val=3 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val= 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val=software 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val=32 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val=32 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- 
accel/accel.sh@21 -- # val=1 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val=Yes 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val= 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.160 07:23:07 -- accel/accel.sh@21 -- # val= 00:05:51.160 07:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.160 07:23:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.542 07:23:08 -- accel/accel.sh@21 -- # val= 00:05:52.542 07:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.542 07:23:08 -- accel/accel.sh@21 -- # val= 00:05:52.542 07:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.542 07:23:08 -- accel/accel.sh@21 -- # val= 00:05:52.542 07:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.542 07:23:08 -- accel/accel.sh@21 -- # val= 00:05:52.542 07:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.542 07:23:08 -- accel/accel.sh@21 -- # val= 00:05:52.542 07:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.542 07:23:08 -- accel/accel.sh@21 -- # val= 00:05:52.542 07:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.542 07:23:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.542 07:23:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.542 07:23:08 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:52.542 07:23:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.542 00:05:52.542 real 0m2.947s 00:05:52.542 user 0m2.654s 00:05:52.542 sys 0m0.284s 00:05:52.542 07:23:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.542 07:23:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.542 ************************************ 00:05:52.542 END TEST accel_xor 00:05:52.542 ************************************ 00:05:52.542 07:23:08 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:52.542 07:23:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:52.542 07:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.542 07:23:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.542 ************************************ 00:05:52.542 START TEST 
accel_dif_verify 00:05:52.542 ************************************ 00:05:52.542 07:23:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:05:52.542 07:23:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.542 07:23:08 -- accel/accel.sh@17 -- # local accel_module 00:05:52.542 07:23:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:52.542 07:23:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:52.542 07:23:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.542 07:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.542 07:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.542 07:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.542 07:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.542 07:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.542 07:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.542 07:23:08 -- accel/accel.sh@42 -- # jq -r . 00:05:52.542 [2024-07-14 07:23:08.436021] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:52.542 [2024-07-14 07:23:08.436107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983692 ] 00:05:52.542 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.542 [2024-07-14 07:23:08.498668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.542 [2024-07-14 07:23:08.617765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.914 07:23:09 -- accel/accel.sh@18 -- # out=' 00:05:53.914 SPDK Configuration: 00:05:53.914 Core mask: 0x1 00:05:53.914 00:05:53.914 Accel Perf Configuration: 00:05:53.914 Workload Type: dif_verify 00:05:53.914 Vector size: 4096 bytes 00:05:53.914 Transfer size: 4096 bytes 00:05:53.914 Block size: 512 bytes 00:05:53.914 Metadata size: 8 bytes 00:05:53.914 Vector count 1 00:05:53.914 Module: software 00:05:53.914 Queue depth: 32 00:05:53.914 Allocate depth: 32 00:05:53.914 # threads/core: 1 00:05:53.914 Run time: 1 seconds 00:05:53.914 Verify: No 00:05:53.914 00:05:53.914 Running for 1 seconds... 00:05:53.914 00:05:53.914 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:53.914 ------------------------------------------------------------------------------------ 00:05:53.914 0,0 81888/s 319 MiB/s 0 0 00:05:53.914 ==================================================================================== 00:05:53.914 Total 81888/s 319 MiB/s 0 0' 00:05:53.914 07:23:09 -- accel/accel.sh@20 -- # IFS=: 00:05:53.914 07:23:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:53.914 07:23:09 -- accel/accel.sh@20 -- # read -r var val 00:05:53.914 07:23:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:53.914 07:23:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.914 07:23:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.914 07:23:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.914 07:23:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.914 07:23:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.914 07:23:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.914 07:23:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.914 07:23:09 -- accel/accel.sh@42 -- # jq -r .
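The dif_verify workload checks T10 protection information (PI): per the configuration above, each 4096-byte transfer is treated as 512-byte blocks, each carrying 8 bytes of metadata (a CRC16 guard plus application and reference tags). The sketch below shows the core of such a check, assuming the standard T10-DIF CRC polynomial 0x8BB7; byte order and tag-checking policy are deliberately simplified, and none of this is SPDK's actual code:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* T10-DIF guard CRC16: polynomial 0x8BB7, init 0, no reflection
 * (plain bitwise form; real implementations use tables or PCLMUL). */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((uint16_t)buf[i] << 8);
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* The 8 bytes of PI per 512-byte block ("Metadata size: 8 bytes").
 * Endianness handling is omitted for brevity. */
struct t10_pi {
    uint16_t guard;    /* CRC16 of the 512-byte data block */
    uint16_t app_tag;  /* application-defined */
    uint32_t ref_tag;  /* typically derived from the block address */
};

int main(void)
{
    uint8_t block[512];
    struct t10_pi pi = { 0 };

    memset(block, 0x5A, sizeof(block));
    pi.guard = crc16_t10dif(block, sizeof(block)); /* generate side */

    /* dif_verify side: recompute the guard and compare. */
    if (crc16_t10dif(block, sizeof(block)) != pi.guard) {
        fprintf(stderr, "guard mismatch\n");
        return 1;
    }
    printf("guard 0x%04x verified\n", pi.guard);
    return 0;
}
```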
00:05:53.914 [2024-07-14 07:23:09.913482] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:53.914 [2024-07-14 07:23:09.913565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983949 ] 00:05:53.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.914 [2024-07-14 07:23:09.978644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.171 [2024-07-14 07:23:10.108872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val= 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val= 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val=0x1 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val= 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val= 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val=dif_verify 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val= 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val=software 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@23 -- # 
accel_module=software 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val=32 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val=32 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val=1 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val=No 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val= 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.171 07:23:10 -- accel/accel.sh@21 -- # val= 00:05:54.171 07:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.171 07:23:10 -- accel/accel.sh@20 -- # read -r var val 00:05:55.545 07:23:11 -- accel/accel.sh@21 -- # val= 00:05:55.545 07:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.545 07:23:11 -- accel/accel.sh@21 -- # val= 00:05:55.545 07:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.545 07:23:11 -- accel/accel.sh@21 -- # val= 00:05:55.545 07:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.545 07:23:11 -- accel/accel.sh@21 -- # val= 00:05:55.545 07:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.545 07:23:11 -- accel/accel.sh@21 -- # val= 00:05:55.545 07:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.545 07:23:11 -- accel/accel.sh@21 -- # val= 00:05:55.545 07:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.545 07:23:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.545 07:23:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.545 07:23:11 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:55.545 07:23:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.545 00:05:55.545 real 0m2.974s 00:05:55.545 user 0m2.670s 00:05:55.545 sys 0m0.298s 00:05:55.545 07:23:11 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.545 07:23:11 -- common/autotest_common.sh@10 -- # set +x 00:05:55.545 ************************************ 00:05:55.545 END TEST accel_dif_verify 00:05:55.545 ************************************ 00:05:55.545 07:23:11 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:55.545 07:23:11 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:55.545 07:23:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.545 07:23:11 -- common/autotest_common.sh@10 -- # set +x 00:05:55.545 ************************************ 00:05:55.545 START TEST accel_dif_generate 00:05:55.545 ************************************ 00:05:55.545 07:23:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:05:55.545 07:23:11 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.545 07:23:11 -- accel/accel.sh@17 -- # local accel_module 00:05:55.545 07:23:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:55.545 07:23:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:55.545 07:23:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.545 07:23:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.545 07:23:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.545 07:23:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.545 07:23:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.545 07:23:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.545 07:23:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.545 07:23:11 -- accel/accel.sh@42 -- # jq -r . 00:05:55.545 [2024-07-14 07:23:11.437560] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:55.545 [2024-07-14 07:23:11.437647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984114 ] 00:05:55.545 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.545 [2024-07-14 07:23:11.499733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.545 [2024-07-14 07:23:11.620460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.917 07:23:12 -- accel/accel.sh@18 -- # out=' 00:05:56.917 SPDK Configuration: 00:05:56.917 Core mask: 0x1 00:05:56.917 00:05:56.917 Accel Perf Configuration: 00:05:56.917 Workload Type: dif_generate 00:05:56.917 Vector size: 4096 bytes 00:05:56.917 Transfer size: 4096 bytes 00:05:56.917 Block size: 512 bytes 00:05:56.917 Metadata size: 8 bytes 00:05:56.917 Vector count 1 00:05:56.917 Module: software 00:05:56.917 Queue depth: 32 00:05:56.917 Allocate depth: 32 00:05:56.917 # threads/core: 1 00:05:56.917 Run time: 1 seconds 00:05:56.917 Verify: No 00:05:56.917 00:05:56.917 Running for 1 seconds... 
00:05:56.917 00:05:56.917 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.917 ------------------------------------------------------------------------------------ 00:05:56.917 0,0 95008/s 371 MiB/s 0 0 00:05:56.917 ==================================================================================== 00:05:56.917 Total 95008/s 371 MiB/s 0 0' 00:05:56.917 07:23:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.917 07:23:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:56.917 07:23:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.917 07:23:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:56.917 07:23:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.917 07:23:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.917 07:23:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.917 07:23:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.917 07:23:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.917 07:23:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.917 07:23:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.917 07:23:12 -- accel/accel.sh@42 -- # jq -r . 00:05:56.917 [2024-07-14 07:23:12.920125] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:56.917 [2024-07-14 07:23:12.920211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984255 ] 00:05:56.917 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.917 [2024-07-14 07:23:12.981495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.175 [2024-07-14 07:23:13.101367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val= 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val= 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val=0x1 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val= 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val= 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val=dif_generate 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=:
00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val= 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val=software 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val=32 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val=32 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val=1 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val=No 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val= 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:57.175 07:23:13 -- accel/accel.sh@21 -- # val= 00:05:57.175 07:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # IFS=: 00:05:57.175 07:23:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.545 07:23:14 -- accel/accel.sh@21 -- # val= 00:05:58.545 07:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # IFS=: 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # read -r var val 00:05:58.545 07:23:14 -- accel/accel.sh@21 -- # val= 00:05:58.545 07:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # IFS=: 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # read -r var val 00:05:58.545 07:23:14 -- accel/accel.sh@21 -- # val= 00:05:58.545 07:23:14 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # IFS=: 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # read -r var val 00:05:58.545 07:23:14 -- accel/accel.sh@21 -- # val= 00:05:58.545 07:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # IFS=: 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # read -r var val 00:05:58.545 07:23:14 -- accel/accel.sh@21 -- # val= 00:05:58.545 07:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # IFS=: 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # read -r var val 00:05:58.545 07:23:14 -- accel/accel.sh@21 -- # val= 00:05:58.545 07:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # IFS=: 00:05:58.545 07:23:14 -- accel/accel.sh@20 -- # read -r var val 00:05:58.545 07:23:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.545 07:23:14 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:58.545 07:23:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.545 00:05:58.545 real 0m2.962s 00:05:58.545 user 0m2.665s 00:05:58.545 sys 0m0.291s 00:05:58.545 07:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.545 07:23:14 -- common/autotest_common.sh@10 -- # set +x 00:05:58.545 ************************************ 00:05:58.545 END TEST accel_dif_generate 00:05:58.545 ************************************ 00:05:58.545 07:23:14 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:58.545 07:23:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:58.545 07:23:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.545 07:23:14 -- common/autotest_common.sh@10 -- # set +x 00:05:58.545 ************************************ 00:05:58.545 START TEST accel_dif_generate_copy 00:05:58.545 ************************************ 00:05:58.545 07:23:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:05:58.545 07:23:14 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.545 07:23:14 -- accel/accel.sh@17 -- # local accel_module 00:05:58.545 07:23:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:58.545 07:23:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:58.545 07:23:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.545 07:23:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.545 07:23:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.545 07:23:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.545 07:23:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.545 07:23:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.545 07:23:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.545 07:23:14 -- accel/accel.sh@42 -- # jq -r . 00:05:58.545 [2024-07-14 07:23:14.424312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
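accel_dif_generate, which just finished above, is the producer side of dif_verify: instead of checking the 8-byte tuple it writes one for every 512-byte block, so each 4096-byte transfer implies eight guard CRCs (at 95008 transfers/s that is roughly 760,000 CRC16 computations per second on this one core). Continuing the illustrative helpers from the dif_verify sketch (crc16_t10dif and struct t10_pi are hypothetical, not SPDK's API):

```c
/* Sketch: fill PI for one 4096-byte transfer, i.e. 8 blocks of 512 bytes
 * ("Block size: 512 bytes" / "Transfer size: 4096 bytes" in the log). */
void dif_generate_xfer(const uint8_t *data, uint32_t first_ref,
                       struct t10_pi pi[8])
{
    for (size_t i = 0; i < 8; i++) {
        pi[i].guard   = crc16_t10dif(data + i * 512, 512);
        pi[i].app_tag = 0;              /* left to the application */
        pi[i].ref_tag = first_ref + i;  /* e.g. incrementing block number */
    }
}
```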
00:05:58.545 [2024-07-14 07:23:14.424395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984538 ] 00:05:58.545 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.545 [2024-07-14 07:23:14.484449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.545 [2024-07-14 07:23:14.605236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.918 07:23:15 -- accel/accel.sh@18 -- # out=' 00:05:59.918 SPDK Configuration: 00:05:59.918 Core mask: 0x1 00:05:59.918 00:05:59.918 Accel Perf Configuration: 00:05:59.918 Workload Type: dif_generate_copy 00:05:59.918 Vector size: 4096 bytes 00:05:59.918 Transfer size: 4096 bytes 00:05:59.919 Vector count 1 00:05:59.919 Module: software 00:05:59.919 Queue depth: 32 00:05:59.919 Allocate depth: 32 00:05:59.919 # threads/core: 1 00:05:59.919 Run time: 1 seconds 00:05:59.919 Verify: No 00:05:59.919 00:05:59.919 Running for 1 seconds... 00:05:59.919 00:05:59.919 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.919 ------------------------------------------------------------------------------------ 00:05:59.919 0,0 75712/s 295 MiB/s 0 0 00:05:59.919 ==================================================================================== 00:05:59.919 Total 75712/s 295 MiB/s 0 0' 00:05:59.919 07:23:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.919 07:23:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:59.919 07:23:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.919 07:23:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:59.919 07:23:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.919 07:23:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.919 07:23:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.919 07:23:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.919 07:23:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.919 07:23:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.919 07:23:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.919 07:23:15 -- accel/accel.sh@42 -- # jq -r . 00:05:59.919 [2024-07-14 07:23:15.905756] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
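dif_generate_copy fuses the previous workload with a copy: the destination buffer and its protection information are produced in one pass over the data rather than by a memcpy followed by a separate dif_generate. Consistent with the extra byte-moving per transfer, throughput lands at 75712/s × 4096 B ≈ 295 MiB/s, below dif_generate's ≈ 371 MiB/s. In the same illustrative style as the sketches above:

```c
/* Sketch: copy a 4096-byte transfer block by block and generate PI as we
 * go, so the source is walked only once (needs <string.h>; types and
 * crc16_t10dif as in the earlier dif_verify sketch, not SPDK's API). */
void dif_generate_copy_xfer(uint8_t *dst, const uint8_t *src,
                            uint32_t first_ref, struct t10_pi pi[8])
{
    for (size_t i = 0; i < 8; i++) {
        memcpy(dst + i * 512, src + i * 512, 512);
        pi[i].guard   = crc16_t10dif(dst + i * 512, 512);
        pi[i].app_tag = 0;
        pi[i].ref_tag = first_ref + i;
    }
}
```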
00:05:59.919 [2024-07-14 07:23:15.905835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984680 ] 00:05:59.919 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.919 [2024-07-14 07:23:15.962238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.919 [2024-07-14 07:23:16.080103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.177 07:23:16 -- accel/accel.sh@21 -- # val= 00:06:00.177 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.177 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.177 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.177 07:23:16 -- accel/accel.sh@21 -- # val= 00:06:00.177 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.177 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.177 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.177 07:23:16 -- accel/accel.sh@21 -- # val=0x1 00:06:00.177 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.177 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.177 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.177 07:23:16 -- accel/accel.sh@21 -- # val= 00:06:00.177 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.177 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.177 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.177 07:23:16 -- accel/accel.sh@21 -- # val= 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val= 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val=software 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val=32 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val=32 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r 
var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val=1 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val=No 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val= 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:00.178 07:23:16 -- accel/accel.sh@21 -- # val= 00:06:00.178 07:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:00.178 07:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:01.551 07:23:17 -- accel/accel.sh@21 -- # val= 00:06:01.551 07:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # IFS=: 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # read -r var val 00:06:01.551 07:23:17 -- accel/accel.sh@21 -- # val= 00:06:01.551 07:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # IFS=: 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # read -r var val 00:06:01.551 07:23:17 -- accel/accel.sh@21 -- # val= 00:06:01.551 07:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # IFS=: 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # read -r var val 00:06:01.551 07:23:17 -- accel/accel.sh@21 -- # val= 00:06:01.551 07:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # IFS=: 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # read -r var val 00:06:01.551 07:23:17 -- accel/accel.sh@21 -- # val= 00:06:01.551 07:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # IFS=: 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # read -r var val 00:06:01.551 07:23:17 -- accel/accel.sh@21 -- # val= 00:06:01.551 07:23:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # IFS=: 00:06:01.551 07:23:17 -- accel/accel.sh@20 -- # read -r var val 00:06:01.551 07:23:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.551 07:23:17 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:01.551 07:23:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.551 00:06:01.551 real 0m2.955s 00:06:01.551 user 0m2.664s 00:06:01.551 sys 0m0.283s 00:06:01.551 07:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.551 07:23:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.551 ************************************ 00:06:01.551 END TEST accel_dif_generate_copy 00:06:01.551 ************************************ 00:06:01.551 07:23:17 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:01.551 07:23:17 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.551 07:23:17 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:01.551 07:23:17 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.551 07:23:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.551 ************************************ 00:06:01.551 START TEST accel_comp 00:06:01.551 ************************************ 00:06:01.551 07:23:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.551 07:23:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.551 07:23:17 -- accel/accel.sh@17 -- # local accel_module 00:06:01.551 07:23:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.551 07:23:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.551 07:23:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.551 07:23:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.551 07:23:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.551 07:23:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.551 07:23:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.551 07:23:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.551 07:23:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.551 07:23:17 -- accel/accel.sh@42 -- # jq -r . 00:06:01.551 [2024-07-14 07:23:17.404260] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:01.551 [2024-07-14 07:23:17.404341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984842 ] 00:06:01.551 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.551 [2024-07-14 07:23:17.465422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.551 [2024-07-14 07:23:17.585289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.925 07:23:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:02.925 00:06:02.925 SPDK Configuration: 00:06:02.925 Core mask: 0x1 00:06:02.925 00:06:02.925 Accel Perf Configuration: 00:06:02.925 Workload Type: compress 00:06:02.925 Transfer size: 4096 bytes 00:06:02.925 Vector count 1 00:06:02.925 Module: software 00:06:02.925 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.925 Queue depth: 32 00:06:02.925 Allocate depth: 32 00:06:02.925 # threads/core: 1 00:06:02.925 Run time: 1 seconds 00:06:02.925 Verify: No 00:06:02.925 00:06:02.925 Running for 1 seconds... 
00:06:02.925 00:06:02.925 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.925 ------------------------------------------------------------------------------------ 00:06:02.925 0,0 32384/s 126 MiB/s 0 0 00:06:02.925 ==================================================================================== 00:06:02.925 Total 32384/s 126 MiB/s 0 0' 00:06:02.925 07:23:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.925 07:23:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.925 07:23:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.925 07:23:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.925 07:23:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.925 07:23:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.925 07:23:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.925 07:23:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.925 07:23:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.925 07:23:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.925 07:23:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.925 07:23:18 -- accel/accel.sh@42 -- # jq -r . 00:06:02.925 [2024-07-14 07:23:18.880577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:02.925 [2024-07-14 07:23:18.880660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985104 ] 00:06:02.925 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.925 [2024-07-14 07:23:18.942584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.184 [2024-07-14 07:23:19.063085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val= 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val= 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val= 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val=0x1 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val= 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val= 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val=compress 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184
07:23:19 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val= 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val=software 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val=32 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val=32 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val=1 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val=No 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val= 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.184 07:23:19 -- accel/accel.sh@21 -- # val= 00:06:03.184 07:23:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.184 07:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:04.559 07:23:20 -- accel/accel.sh@21 -- # val= 00:06:04.559 07:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:04.559 07:23:20 -- accel/accel.sh@21 -- # val= 00:06:04.559 07:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:04.559 07:23:20 -- accel/accel.sh@21 -- # val= 00:06:04.559 07:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # 
IFS=: 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:04.559 07:23:20 -- accel/accel.sh@21 -- # val= 00:06:04.559 07:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:04.559 07:23:20 -- accel/accel.sh@21 -- # val= 00:06:04.559 07:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:04.559 07:23:20 -- accel/accel.sh@21 -- # val= 00:06:04.559 07:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:04.559 07:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:04.559 07:23:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.559 07:23:20 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:04.559 07:23:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.559 00:06:04.559 real 0m2.967s 00:06:04.559 user 0m2.660s 00:06:04.559 sys 0m0.300s 00:06:04.559 07:23:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.559 07:23:20 -- common/autotest_common.sh@10 -- # set +x 00:06:04.559 ************************************ 00:06:04.559 END TEST accel_comp 00:06:04.559 ************************************ 00:06:04.559 07:23:20 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:04.559 07:23:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:04.559 07:23:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.559 07:23:20 -- common/autotest_common.sh@10 -- # set +x 00:06:04.559 ************************************ 00:06:04.559 START TEST accel_decomp 00:06:04.559 ************************************ 00:06:04.559 07:23:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:04.559 07:23:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.559 07:23:20 -- accel/accel.sh@17 -- # local accel_module 00:06:04.559 07:23:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:04.559 07:23:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:04.559 07:23:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.559 07:23:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.559 07:23:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.559 07:23:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.559 07:23:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.559 07:23:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.559 07:23:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.559 07:23:20 -- accel/accel.sh@42 -- # jq -r . 00:06:04.559 [2024-07-14 07:23:20.397267] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
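Note: the xtrace above records the exact accel_perf command under test. A minimal sketch of reproducing it by hand, assuming the same workspace checkout: the harness feeds its JSON accel config over /dev/fd/62, but every [[ 0 -gt 0 ]] module check above falls through, so the config is effectively empty and the software module is selected.

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1: run for 1 second; -w decompress: workload type;
  # -l: compressed input file; -y: verify output (reported as "Verify: Yes")
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y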
00:06:04.559 [2024-07-14 07:23:20.397348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985265 ] 00:06:04.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.559 [2024-07-14 07:23:20.463106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.559 [2024-07-14 07:23:20.581722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.931 07:23:21 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:05.931 00:06:05.931 SPDK Configuration: 00:06:05.931 Core mask: 0x1 00:06:05.931 00:06:05.931 Accel Perf Configuration: 00:06:05.931 Workload Type: decompress 00:06:05.931 Transfer size: 4096 bytes 00:06:05.931 Vector count 1 00:06:05.931 Module: software 00:06:05.931 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:05.931 Queue depth: 32 00:06:05.931 Allocate depth: 32 00:06:05.931 # threads/core: 1 00:06:05.931 Run time: 1 seconds 00:06:05.931 Verify: Yes 00:06:05.931 00:06:05.931 Running for 1 seconds... 00:06:05.931 00:06:05.931 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:05.931 ------------------------------------------------------------------------------------ 00:06:05.931 0,0 55488/s 102 MiB/s 0 0 00:06:05.931 ==================================================================================== 00:06:05.931 Total 55488/s 216 MiB/s 0 0' 00:06:05.931 07:23:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.931 07:23:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.931 07:23:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.931 07:23:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.931 07:23:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.931 07:23:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.931 07:23:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.931 07:23:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.931 07:23:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.931 07:23:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.931 07:23:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.931 07:23:21 -- accel/accel.sh@42 -- # jq -r . 00:06:05.931 [2024-07-14 07:23:21.869508] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
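Note: the reported bandwidth follows directly from the transfer count, bandwidth = transfers/s × transfer size. For the first decompress run above, 55488/s × 4096 B is about 216 MiB/s, exactly the Total row. A quick shell check (illustrative one-liner, not part of the harness):

  echo $(( 55488 * 4096 / 1024 / 1024 ))   # prints 216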
00:06:05.931 [2024-07-14 07:23:21.869590] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985408 ] 00:06:05.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.931 [2024-07-14 07:23:21.930785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.931 [2024-07-14 07:23:22.050674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val= 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val= 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val= 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val=0x1 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val= 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val= 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val=decompress 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val= 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val=software 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val=32 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 
-- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val=32 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val=1 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val=Yes 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val= 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.190 07:23:22 -- accel/accel.sh@21 -- # val= 00:06:06.190 07:23:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.190 07:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:07.571 07:23:23 -- accel/accel.sh@21 -- # val= 00:06:07.571 07:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:07.571 07:23:23 -- accel/accel.sh@21 -- # val= 00:06:07.571 07:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:07.571 07:23:23 -- accel/accel.sh@21 -- # val= 00:06:07.571 07:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:07.571 07:23:23 -- accel/accel.sh@21 -- # val= 00:06:07.571 07:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:07.571 07:23:23 -- accel/accel.sh@21 -- # val= 00:06:07.571 07:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:07.571 07:23:23 -- accel/accel.sh@21 -- # val= 00:06:07.571 07:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:07.571 07:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:07.571 07:23:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.571 07:23:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:07.571 07:23:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.571 00:06:07.571 real 0m2.949s 00:06:07.571 user 0m2.650s 00:06:07.571 sys 0m0.292s 00:06:07.571 07:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.571 07:23:23 -- common/autotest_common.sh@10 -- # set +x 00:06:07.571 ************************************ 00:06:07.571 END TEST accel_decomp 00:06:07.571 ************************************ 00:06:07.571 07:23:23 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:07.571 07:23:23 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:07.571 07:23:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.571 07:23:23 -- common/autotest_common.sh@10 -- # set +x 00:06:07.571 ************************************ 00:06:07.571 START TEST accel_decmop_full 00:06:07.571 ************************************ 00:06:07.571 07:23:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:07.571 07:23:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.571 07:23:23 -- accel/accel.sh@17 -- # local accel_module 00:06:07.571 07:23:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:07.571 07:23:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:07.571 07:23:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.571 07:23:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.571 07:23:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.571 07:23:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.571 07:23:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.571 07:23:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.571 07:23:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.571 07:23:23 -- accel/accel.sh@42 -- # jq -r . 00:06:07.571 [2024-07-14 07:23:23.372747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:07.571 [2024-07-14 07:23:23.372825] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985654 ] 00:06:07.571 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.571 [2024-07-14 07:23:23.434092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.571 [2024-07-14 07:23:23.554661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.989 07:23:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:08.989 00:06:08.989 SPDK Configuration: 00:06:08.989 Core mask: 0x1 00:06:08.989 00:06:08.989 Accel Perf Configuration: 00:06:08.989 Workload Type: decompress 00:06:08.989 Transfer size: 111250 bytes 00:06:08.989 Vector count 1 00:06:08.989 Module: software 00:06:08.989 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.989 Queue depth: 32 00:06:08.989 Allocate depth: 32 00:06:08.989 # threads/core: 1 00:06:08.989 Run time: 1 seconds 00:06:08.989 Verify: Yes 00:06:08.989 00:06:08.989 Running for 1 seconds... 
00:06:08.989 00:06:08.989 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.989 ------------------------------------------------------------------------------------ 00:06:08.989 0,0 3776/s 155 MiB/s 0 0 00:06:08.989 ==================================================================================== 00:06:08.989 Total 3776/s 400 MiB/s 0 0' 00:06:08.989 07:23:24 -- accel/accel.sh@20 -- # IFS=: 00:06:08.989 07:23:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:08.989 07:23:24 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:08.990 07:23:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.990 07:23:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.990 07:23:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.990 07:23:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.990 07:23:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.990 07:23:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.990 07:23:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.990 07:23:24 -- accel/accel.sh@42 -- # jq -r . 00:06:08.990 [2024-07-14 07:23:24.871973] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:08.990 [2024-07-14 07:23:24.872057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985833 ] 00:06:08.990 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.990 [2024-07-14 07:23:24.932943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.990 [2024-07-14 07:23:25.053369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val= 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val= 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val= 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val=0x1 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val= 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val= 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val=decompress 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" 
in 00:06:08.990 07:23:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val= 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val=software 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val=32 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val=32 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val=1 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val=Yes 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val= 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:08.990 07:23:25 -- accel/accel.sh@21 -- # val= 00:06:08.990 07:23:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:08.990 07:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:10.365 07:23:26 -- accel/accel.sh@21 -- # val= 00:06:10.365 07:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.365 07:23:26 -- accel/accel.sh@21 -- # val= 00:06:10.365 07:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.365 07:23:26 -- accel/accel.sh@21 -- # val= 00:06:10.365 07:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.365 07:23:26 -- 
accel/accel.sh@20 -- # IFS=: 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.365 07:23:26 -- accel/accel.sh@21 -- # val= 00:06:10.365 07:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.365 07:23:26 -- accel/accel.sh@21 -- # val= 00:06:10.365 07:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.365 07:23:26 -- accel/accel.sh@21 -- # val= 00:06:10.365 07:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.365 07:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.365 07:23:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.365 07:23:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:10.365 07:23:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.365 00:06:10.365 real 0m3.000s 00:06:10.365 user 0m2.706s 00:06:10.365 sys 0m0.287s 00:06:10.365 07:23:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.365 07:23:26 -- common/autotest_common.sh@10 -- # set +x 00:06:10.365 ************************************ 00:06:10.365 END TEST accel_decmop_full 00:06:10.365 ************************************ 00:06:10.365 07:23:26 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:10.365 07:23:26 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:10.365 07:23:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.365 07:23:26 -- common/autotest_common.sh@10 -- # set +x 00:06:10.365 ************************************ 00:06:10.365 START TEST accel_decomp_mcore 00:06:10.365 ************************************ 00:06:10.365 07:23:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:10.365 07:23:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.365 07:23:26 -- accel/accel.sh@17 -- # local accel_module 00:06:10.365 07:23:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:10.365 07:23:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:10.365 07:23:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.365 07:23:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.365 07:23:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.365 07:23:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.365 07:23:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.365 07:23:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.365 07:23:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.365 07:23:26 -- accel/accel.sh@42 -- # jq -r . 00:06:10.365 [2024-07-14 07:23:26.397153] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
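Note: the only change from the single-core decompress variant is the core mask: -m 0xf selects cores 0-3, which is why four "Reactor started on core" notices and four per-core result rows appear below. A sketch of the equivalent manual run, under the same workspace assumptions as above:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf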
00:06:10.366 [2024-07-14 07:23:26.397233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985998 ] 00:06:10.366 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.366 [2024-07-14 07:23:26.459219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.625 [2024-07-14 07:23:26.584112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.625 [2024-07-14 07:23:26.584166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.625 [2024-07-14 07:23:26.584219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.625 [2024-07-14 07:23:26.584222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.999 07:23:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:11.999 00:06:11.999 SPDK Configuration: 00:06:11.999 Core mask: 0xf 00:06:11.999 00:06:11.999 Accel Perf Configuration: 00:06:11.999 Workload Type: decompress 00:06:11.999 Transfer size: 4096 bytes 00:06:11.999 Vector count 1 00:06:11.999 Module: software 00:06:11.999 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.999 Queue depth: 32 00:06:11.999 Allocate depth: 32 00:06:11.999 # threads/core: 1 00:06:11.999 Run time: 1 seconds 00:06:11.999 Verify: Yes 00:06:11.999 00:06:11.999 Running for 1 seconds... 00:06:11.999 00:06:11.999 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:11.999 ------------------------------------------------------------------------------------ 00:06:11.999 0,0 50112/s 92 MiB/s 0 0 00:06:11.999 3,0 50688/s 93 MiB/s 0 0 00:06:11.999 2,0 50752/s 93 MiB/s 0 0 00:06:11.999 1,0 50624/s 93 MiB/s 0 0 00:06:11.999 ==================================================================================== 00:06:11.999 Total 202176/s 789 MiB/s 0 0' 00:06:11.999 07:23:27 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:11.999 07:23:27 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:11.999 07:23:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.999 07:23:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.999 07:23:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.999 07:23:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.999 07:23:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.999 07:23:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.999 07:23:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.999 07:23:27 -- accel/accel.sh@42 -- # jq -r . 00:06:11.999 [2024-07-14 07:23:27.888398] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
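Note: the per-core rows of the first table above sum to the Total row, 50112 + 50688 + 50752 + 50624 = 202176 transfers/s, and 202176/s × 4096 B is about 789 MiB/s as reported; each core runs only slightly below the ~55k/s the single-core run achieved.

  echo $(( (50112 + 50688 + 50752 + 50624) * 4096 / 1024 / 1024 ))   # prints 789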
00:06:11.999 [2024-07-14 07:23:27.888479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3986229 ] 00:06:11.999 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.999 [2024-07-14 07:23:27.955370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.999 [2024-07-14 07:23:28.079174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.999 [2024-07-14 07:23:28.079217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.999 [2024-07-14 07:23:28.079272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.999 [2024-07-14 07:23:28.079275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val= 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val= 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val= 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val=0xf 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val= 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val= 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val=decompress 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val= 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val=software 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val=32 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val=32 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val=1 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val=Yes 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val= 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:11.999 07:23:28 -- accel/accel.sh@21 -- # val= 00:06:11.999 07:23:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # IFS=: 00:06:11.999 07:23:28 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 
07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@21 -- # val= 00:06:13.371 07:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.371 07:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.371 07:23:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.371 07:23:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:13.371 07:23:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.371 00:06:13.371 real 0m2.996s 00:06:13.371 user 0m9.620s 00:06:13.371 sys 0m0.313s 00:06:13.371 07:23:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.371 07:23:29 -- common/autotest_common.sh@10 -- # set +x 00:06:13.371 ************************************ 00:06:13.371 END TEST accel_decomp_mcore 00:06:13.371 ************************************ 00:06:13.371 07:23:29 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:13.371 07:23:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:13.371 07:23:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.371 07:23:29 -- common/autotest_common.sh@10 -- # set +x 00:06:13.371 ************************************ 00:06:13.371 START TEST accel_decomp_full_mcore 00:06:13.371 ************************************ 00:06:13.371 07:23:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:13.371 07:23:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.371 07:23:29 -- accel/accel.sh@17 -- # local accel_module 00:06:13.371 07:23:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:13.371 07:23:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:13.371 07:23:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.371 07:23:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.371 07:23:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.371 07:23:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.371 07:23:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.371 07:23:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.371 07:23:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.371 07:23:29 -- accel/accel.sh@42 -- # jq -r . 00:06:13.371 [2024-07-14 07:23:29.421154] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
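Note: the "full" variant adds -o 0 to the same command line. In these runs that corresponds to the 111250-byte transfer size printed in the configuration blocks, versus the 4096-byte transfers seen without the flag; this mapping is inferred from the logged configuration, not from accel_perf documentation.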
00:06:13.371 [2024-07-14 07:23:29.421236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3986428 ] 00:06:13.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.371 [2024-07-14 07:23:29.484432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.629 [2024-07-14 07:23:29.609517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.629 [2024-07-14 07:23:29.609574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.629 [2024-07-14 07:23:29.609626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.629 [2024-07-14 07:23:29.609630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.004 07:23:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:15.004 00:06:15.004 SPDK Configuration: 00:06:15.004 Core mask: 0xf 00:06:15.004 00:06:15.004 Accel Perf Configuration: 00:06:15.004 Workload Type: decompress 00:06:15.004 Transfer size: 111250 bytes 00:06:15.004 Vector count 1 00:06:15.004 Module: software 00:06:15.004 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:15.004 Queue depth: 32 00:06:15.004 Allocate depth: 32 00:06:15.004 # threads/core: 1 00:06:15.004 Run time: 1 seconds 00:06:15.004 Verify: Yes 00:06:15.004 00:06:15.004 Running for 1 seconds... 00:06:15.004 00:06:15.004 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.004 ------------------------------------------------------------------------------------ 00:06:15.004 0,0 3776/s 155 MiB/s 0 0 00:06:15.004 3,0 3776/s 155 MiB/s 0 0 00:06:15.004 2,0 3776/s 155 MiB/s 0 0 00:06:15.004 1,0 3776/s 155 MiB/s 0 0 00:06:15.004 ==================================================================================== 00:06:15.004 Total 15104/s 1602 MiB/s 0 0' 00:06:15.004 07:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:15.004 07:23:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.004 07:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:15.004 07:23:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.004 07:23:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.004 07:23:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.004 07:23:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.004 07:23:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.004 07:23:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.004 07:23:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.004 07:23:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.004 07:23:30 -- accel/accel.sh@42 -- # jq -r . 00:06:15.004 [2024-07-14 07:23:30.937296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
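Note: the per-core rate in the first table above matches the single-core full-buffer test exactly, 3776 transfers/s per core, so four cores aggregate to 15104/s, and 15104/s × 111250 B is about 1602 MiB/s as the Total row shows: the large-transfer decompress path scales linearly across cores here.

  echo $(( 15104 * 111250 / 1024 / 1024 ))   # prints 1602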
00:06:15.004 [2024-07-14 07:23:30.937378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3986569 ] 00:06:15.004 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.004 [2024-07-14 07:23:30.999384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.004 [2024-07-14 07:23:31.119391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.004 [2024-07-14 07:23:31.119449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.004 [2024-07-14 07:23:31.119501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.004 [2024-07-14 07:23:31.119504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val= 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val= 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val= 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val=0xf 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val= 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val= 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val=decompress 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val= 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.262 07:23:31 -- accel/accel.sh@21 -- # val=software 00:06:15.262 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.262 07:23:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.262 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.263 07:23:31 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:15.263 07:23:31 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.263 07:23:31 -- accel/accel.sh@21 -- # val=32 00:06:15.263 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.263 07:23:31 -- accel/accel.sh@21 -- # val=32 00:06:15.263 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.263 07:23:31 -- accel/accel.sh@21 -- # val=1 00:06:15.263 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.263 07:23:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.263 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.263 07:23:31 -- accel/accel.sh@21 -- # val=Yes 00:06:15.263 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.263 07:23:31 -- accel/accel.sh@21 -- # val= 00:06:15.263 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:15.263 07:23:31 -- accel/accel.sh@21 -- # val= 00:06:15.263 07:23:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # IFS=: 00:06:15.263 07:23:31 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 
07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@21 -- # val= 00:06:16.638 07:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.638 07:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.638 07:23:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.638 07:23:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:16.638 07:23:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.638 00:06:16.638 real 0m3.006s 00:06:16.638 user 0m9.679s 00:06:16.638 sys 0m0.300s 00:06:16.638 07:23:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.638 07:23:32 -- common/autotest_common.sh@10 -- # set +x 00:06:16.638 ************************************ 00:06:16.638 END TEST accel_decomp_full_mcore 00:06:16.638 ************************************ 00:06:16.638 07:23:32 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.638 07:23:32 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:16.638 07:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.638 07:23:32 -- common/autotest_common.sh@10 -- # set +x 00:06:16.638 ************************************ 00:06:16.638 START TEST accel_decomp_mthread 00:06:16.638 ************************************ 00:06:16.638 07:23:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.638 07:23:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.638 07:23:32 -- accel/accel.sh@17 -- # local accel_module 00:06:16.638 07:23:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.638 07:23:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.638 07:23:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.638 07:23:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.638 07:23:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.638 07:23:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.638 07:23:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.638 07:23:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.638 07:23:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.638 07:23:32 -- accel/accel.sh@42 -- # jq -r . 00:06:16.638 [2024-07-14 07:23:32.453929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:16.638 [2024-07-14 07:23:32.454009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3986873 ] 00:06:16.638 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.638 [2024-07-14 07:23:32.515736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.638 [2024-07-14 07:23:32.636683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.010 07:23:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:18.010 00:06:18.010 SPDK Configuration: 00:06:18.010 Core mask: 0x1 00:06:18.010 00:06:18.010 Accel Perf Configuration: 00:06:18.010 Workload Type: decompress 00:06:18.011 Transfer size: 4096 bytes 00:06:18.011 Vector count 1 00:06:18.011 Module: software 00:06:18.011 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.011 Queue depth: 32 00:06:18.011 Allocate depth: 32 00:06:18.011 # threads/core: 2 00:06:18.011 Run time: 1 seconds 00:06:18.011 Verify: Yes 00:06:18.011 00:06:18.011 Running for 1 seconds... 00:06:18.011 00:06:18.011 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.011 ------------------------------------------------------------------------------------ 00:06:18.011 0,1 28096/s 51 MiB/s 0 0 00:06:18.011 0,0 27968/s 51 MiB/s 0 0 00:06:18.011 ==================================================================================== 00:06:18.011 Total 56064/s 219 MiB/s 0 0' 00:06:18.011 07:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:18.011 07:23:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.011 07:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:18.011 07:23:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:18.011 07:23:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.011 07:23:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.011 07:23:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.011 07:23:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.011 07:23:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.011 07:23:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.011 07:23:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.011 07:23:33 -- accel/accel.sh@42 -- # jq -r . 00:06:18.011 [2024-07-14 07:23:33.947060] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
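Note: -T 2 keeps the 0x1 core mask but runs two worker threads on core 0, hence the "# threads/core: 2" line and the two result rows 0,0 and 0,1 above. The threads split the work almost evenly (28096 + 27968 = 56064 transfers/s, about 219 MiB/s at 4096 B), essentially the same aggregate as the one-thread run on this software path. Manual sketch, same workspace assumptions as above:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2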
00:06:18.011 [2024-07-14 07:23:33.947144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987014 ] 00:06:18.011 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.011 [2024-07-14 07:23:34.008485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.011 [2024-07-14 07:23:34.129267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.268 07:23:34 -- accel/accel.sh@21 -- # val= 00:06:18.268 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.268 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.268 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.268 07:23:34 -- accel/accel.sh@21 -- # val= 00:06:18.268 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.268 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.268 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.268 07:23:34 -- accel/accel.sh@21 -- # val= 00:06:18.268 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.268 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.268 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.268 07:23:34 -- accel/accel.sh@21 -- # val=0x1 00:06:18.268 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.268 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.268 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.268 07:23:34 -- accel/accel.sh@21 -- # val= 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val= 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val=decompress 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val= 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val=software 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val=32 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 
-- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val=32 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val=2 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val=Yes 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val= 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:18.269 07:23:34 -- accel/accel.sh@21 -- # val= 00:06:18.269 07:23:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # IFS=: 00:06:18.269 07:23:34 -- accel/accel.sh@20 -- # read -r var val 00:06:19.642 07:23:35 -- accel/accel.sh@21 -- # val= 00:06:19.642 07:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.642 07:23:35 -- accel/accel.sh@21 -- # val= 00:06:19.642 07:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.642 07:23:35 -- accel/accel.sh@21 -- # val= 00:06:19.642 07:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.642 07:23:35 -- accel/accel.sh@21 -- # val= 00:06:19.642 07:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.642 07:23:35 -- accel/accel.sh@21 -- # val= 00:06:19.642 07:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.642 07:23:35 -- accel/accel.sh@21 -- # val= 00:06:19.642 07:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.642 07:23:35 -- accel/accel.sh@21 -- # val= 00:06:19.642 07:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.642 07:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.642 07:23:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.642 07:23:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:19.642 07:23:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.642 00:06:19.642 real 0m2.989s 00:06:19.642 user 0m2.683s 00:06:19.642 sys 0m0.299s 00:06:19.643 07:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.643 07:23:35 -- common/autotest_common.sh@10 -- # set +x 
00:06:19.643 ************************************ 00:06:19.643 END TEST accel_decomp_mthread 00:06:19.643 ************************************ 00:06:19.643 07:23:35 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.643 07:23:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:19.643 07:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.643 07:23:35 -- common/autotest_common.sh@10 -- # set +x 00:06:19.643 ************************************ 00:06:19.643 START TEST accel_deomp_full_mthread 00:06:19.643 ************************************ 00:06:19.643 07:23:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.643 07:23:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.643 07:23:35 -- accel/accel.sh@17 -- # local accel_module 00:06:19.643 07:23:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.643 07:23:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:19.643 07:23:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.643 07:23:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.643 07:23:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.643 07:23:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.643 07:23:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.643 07:23:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.643 07:23:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.643 07:23:35 -- accel/accel.sh@42 -- # jq -r . 00:06:19.643 [2024-07-14 07:23:35.468137] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:19.643 [2024-07-14 07:23:35.468218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987175 ] 00:06:19.643 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.643 [2024-07-14 07:23:35.530495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.643 [2024-07-14 07:23:35.650966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.016 07:23:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:21.016 00:06:21.016 SPDK Configuration: 00:06:21.016 Core mask: 0x1 00:06:21.016 00:06:21.016 Accel Perf Configuration: 00:06:21.016 Workload Type: decompress 00:06:21.016 Transfer size: 111250 bytes 00:06:21.016 Vector count 1 00:06:21.016 Module: software 00:06:21.016 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.016 Queue depth: 32 00:06:21.016 Allocate depth: 32 00:06:21.016 # threads/core: 2 00:06:21.016 Run time: 1 seconds 00:06:21.016 Verify: Yes 00:06:21.016 00:06:21.016 Running for 1 seconds... 
00:06:21.016 00:06:21.016 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.016 ------------------------------------------------------------------------------------ 00:06:21.016 0,1 1952/s 80 MiB/s 0 0 00:06:21.016 0,0 1920/s 79 MiB/s 0 0 00:06:21.016 ==================================================================================== 00:06:21.016 Total 3872/s 410 MiB/s 0 0' 00:06:21.016 07:23:36 -- accel/accel.sh@20 -- # IFS=: 00:06:21.016 07:23:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:21.016 07:23:36 -- accel/accel.sh@20 -- # read -r var val 00:06:21.016 07:23:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:21.016 07:23:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.016 07:23:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.016 07:23:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.016 07:23:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.016 07:23:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.016 07:23:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.016 07:23:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.016 07:23:36 -- accel/accel.sh@42 -- # jq -r . 00:06:21.016 [2024-07-14 07:23:36.981123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:21.016 [2024-07-14 07:23:36.981206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987436 ] 00:06:21.016 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.016 [2024-07-14 07:23:37.038215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.016 [2024-07-14 07:23:37.155996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.274 07:23:37 -- accel/accel.sh@21 -- # val= 00:06:21.274 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.274 07:23:37 -- accel/accel.sh@21 -- # val= 00:06:21.274 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.274 07:23:37 -- accel/accel.sh@21 -- # val= 00:06:21.274 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.274 07:23:37 -- accel/accel.sh@21 -- # val=0x1 00:06:21.274 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.274 07:23:37 -- accel/accel.sh@21 -- # val= 00:06:21.274 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.274 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.274 07:23:37 -- accel/accel.sh@21 -- # val= 00:06:21.274 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val=decompress 00:06:21.275 
07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val= 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val=software 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val=32 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val=32 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val=2 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val=Yes 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val= 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:21.275 07:23:37 -- accel/accel.sh@21 -- # val= 00:06:21.275 07:23:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # IFS=: 00:06:21.275 07:23:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.647 07:23:38 -- accel/accel.sh@21 -- # val= 00:06:22.647 07:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:22.647 07:23:38 -- accel/accel.sh@21 -- # val= 00:06:22.647 07:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:22.647 07:23:38 -- accel/accel.sh@21 -- # val= 00:06:22.647 07:23:38 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:22.647 07:23:38 -- accel/accel.sh@21 -- # val= 00:06:22.647 07:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:22.647 07:23:38 -- accel/accel.sh@21 -- # val= 00:06:22.647 07:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:22.647 07:23:38 -- accel/accel.sh@21 -- # val= 00:06:22.647 07:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:22.647 07:23:38 -- accel/accel.sh@21 -- # val= 00:06:22.647 07:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:22.647 07:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:22.647 07:23:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.647 07:23:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:22.647 07:23:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.647 00:06:22.647 real 0m3.033s 00:06:22.647 user 0m2.743s 00:06:22.647 sys 0m0.282s 00:06:22.647 07:23:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.647 07:23:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.647 ************************************ 00:06:22.647 END TEST accel_deomp_full_mthread 00:06:22.647 ************************************ 00:06:22.647 07:23:38 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:22.647 07:23:38 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:22.647 07:23:38 -- accel/accel.sh@129 -- # build_accel_config 00:06:22.647 07:23:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:22.647 07:23:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.647 07:23:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.647 07:23:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.647 07:23:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.647 07:23:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.647 07:23:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.647 07:23:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.647 07:23:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.647 07:23:38 -- accel/accel.sh@42 -- # jq -r . 00:06:22.647 ************************************ 00:06:22.647 START TEST accel_dif_functional_tests 00:06:22.647 ************************************ 00:06:22.647 07:23:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:22.647 [2024-07-14 07:23:38.547895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:22.647 [2024-07-14 07:23:38.547982] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987604 ] 00:06:22.647 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.647 [2024-07-14 07:23:38.614615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.647 [2024-07-14 07:23:38.737498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.647 [2024-07-14 07:23:38.737556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.647 [2024-07-14 07:23:38.737560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.906 00:06:22.906 00:06:22.906 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.906 http://cunit.sourceforge.net/ 00:06:22.906 00:06:22.906 00:06:22.906 Suite: accel_dif 00:06:22.906 Test: verify: DIF generated, GUARD check ...passed 00:06:22.906 Test: verify: DIF generated, APPTAG check ...passed 00:06:22.906 Test: verify: DIF generated, REFTAG check ...passed 00:06:22.906 Test: verify: DIF not generated, GUARD check ...[2024-07-14 07:23:38.840182] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:22.906 [2024-07-14 07:23:38.840254] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:22.906 passed 00:06:22.906 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 07:23:38.840303] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:22.906 [2024-07-14 07:23:38.840334] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:22.906 passed 00:06:22.906 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 07:23:38.840369] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:22.906 [2024-07-14 07:23:38.840396] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:22.906 passed 00:06:22.906 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:22.906 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 07:23:38.840463] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:22.906 passed 00:06:22.906 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:22.906 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:22.906 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:22.906 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 07:23:38.840618] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:22.906 passed 00:06:22.906 Test: generate copy: DIF generated, GUARD check ...passed 00:06:22.906 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:22.906 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:22.906 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:22.906 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:22.906 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:22.906 Test: generate copy: iovecs-len validate ...[2024-07-14 07:23:38.840875] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:22.906 passed 00:06:22.906 Test: generate copy: buffer alignment validate ...passed 00:06:22.906 00:06:22.906 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.906 suites 1 1 n/a 0 0 00:06:22.906 tests 20 20 20 0 0 00:06:22.906 asserts 204 204 204 0 n/a 00:06:22.906 00:06:22.906 Elapsed time = 0.003 seconds 00:06:23.165 00:06:23.165 real 0m0.602s 00:06:23.165 user 0m0.914s 00:06:23.165 sys 0m0.191s 00:06:23.165 07:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.165 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.165 ************************************ 00:06:23.165 END TEST accel_dif_functional_tests 00:06:23.165 ************************************ 00:06:23.165 00:06:23.165 real 1m3.197s 00:06:23.165 user 1m10.966s 00:06:23.165 sys 0m7.305s 00:06:23.165 07:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.165 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.165 ************************************ 00:06:23.165 END TEST accel 00:06:23.165 ************************************ 00:06:23.165 07:23:39 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:23.165 07:23:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.165 07:23:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.165 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.165 ************************************ 00:06:23.165 START TEST accel_rpc 00:06:23.165 ************************************ 00:06:23.165 07:23:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:23.165 * Looking for test storage... 00:06:23.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:23.165 07:23:39 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.165 07:23:39 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3987781 00:06:23.165 07:23:39 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:23.165 07:23:39 -- accel/accel_rpc.sh@15 -- # waitforlisten 3987781 00:06:23.165 07:23:39 -- common/autotest_common.sh@819 -- # '[' -z 3987781 ']' 00:06:23.165 07:23:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.165 07:23:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.165 07:23:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.165 07:23:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.165 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.165 [2024-07-14 07:23:39.258164] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
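The START/END banners and the real/user/sys timing lines throughout this section all come from one shared wrapper. A simplified sketch of that pattern, inferred from the banners; the real helper lives in test/common/autotest_common.sh and also manages xtrace state:

  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                 # produces the 'real/user/sys' lines above
      local rc=$?
      echo "************ END TEST $name ************"
      return $rc
  }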
00:06:23.165 [2024-07-14 07:23:39.258259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987781 ] 00:06:23.165 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.165 [2024-07-14 07:23:39.322362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.423 [2024-07-14 07:23:39.428046] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.423 [2024-07-14 07:23:39.428225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.423 07:23:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.423 07:23:39 -- common/autotest_common.sh@852 -- # return 0 00:06:23.423 07:23:39 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:23.423 07:23:39 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:23.423 07:23:39 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:23.423 07:23:39 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:23.423 07:23:39 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:23.423 07:23:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.423 07:23:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.423 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.423 ************************************ 00:06:23.423 START TEST accel_assign_opcode 00:06:23.423 ************************************ 00:06:23.423 07:23:39 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:23.423 07:23:39 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:23.423 07:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.423 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.423 [2024-07-14 07:23:39.464741] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:23.423 07:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.423 07:23:39 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:23.423 07:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.423 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.423 [2024-07-14 07:23:39.472753] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:23.423 07:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.423 07:23:39 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:23.423 07:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.423 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.714 07:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.714 07:23:39 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:23.714 07:23:39 -- accel/accel_rpc.sh@42 -- # grep software 00:06:23.714 07:23:39 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:23.714 07:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.714 07:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.714 07:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.714 software 00:06:23.714 00:06:23.714 real 0m0.313s 00:06:23.714 user 0m0.038s 00:06:23.714 sys 0m0.011s 00:06:23.714 07:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.714 07:23:39 -- common/autotest_common.sh@10 -- # set +x 
00:06:23.714 ************************************ 00:06:23.714 END TEST accel_assign_opcode 00:06:23.714 ************************************ 00:06:23.714 07:23:39 -- accel/accel_rpc.sh@55 -- # killprocess 3987781 00:06:23.714 07:23:39 -- common/autotest_common.sh@926 -- # '[' -z 3987781 ']' 00:06:23.714 07:23:39 -- common/autotest_common.sh@930 -- # kill -0 3987781 00:06:23.714 07:23:39 -- common/autotest_common.sh@931 -- # uname 00:06:23.714 07:23:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.714 07:23:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3987781 00:06:23.714 07:23:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.714 07:23:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.714 07:23:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3987781' 00:06:23.714 killing process with pid 3987781 00:06:23.715 07:23:39 -- common/autotest_common.sh@945 -- # kill 3987781 00:06:23.715 07:23:39 -- common/autotest_common.sh@950 -- # wait 3987781 00:06:24.282 00:06:24.282 real 0m1.146s 00:06:24.282 user 0m1.049s 00:06:24.282 sys 0m0.430s 00:06:24.282 07:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.282 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:24.282 ************************************ 00:06:24.282 END TEST accel_rpc 00:06:24.282 ************************************ 00:06:24.282 07:23:40 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:24.282 07:23:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.282 07:23:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.282 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:24.282 ************************************ 00:06:24.282 START TEST app_cmdline 00:06:24.282 ************************************ 00:06:24.282 07:23:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:24.282 * Looking for test storage... 00:06:24.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:24.282 07:23:40 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:24.282 07:23:40 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3987986 00:06:24.282 07:23:40 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:24.282 07:23:40 -- app/cmdline.sh@18 -- # waitforlisten 3987986 00:06:24.282 07:23:40 -- common/autotest_common.sh@819 -- # '[' -z 3987986 ']' 00:06:24.282 07:23:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.282 07:23:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.283 07:23:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.283 07:23:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.283 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:24.283 [2024-07-14 07:23:40.431498] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
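Here spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock. A small sketch of poking that surface, assuming the same workspace paths; the rejected call should fail with JSON-RPC error -32601 (Method not found), exactly as the test verifies below:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/scripts/rpc.py" spdk_get_version | jq -r .version   # "SPDK v24.01.1-pre git sha1 4b94202c6"
  "$SPDK_DIR/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats \
      || echo "rejected by the allowlist, as expected"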
00:06:24.283 [2024-07-14 07:23:40.431588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987986 ] 00:06:24.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.541 [2024-07-14 07:23:40.491771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.541 [2024-07-14 07:23:40.601428] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.541 [2024-07-14 07:23:40.601601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.475 07:23:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.475 07:23:41 -- common/autotest_common.sh@852 -- # return 0 00:06:25.475 07:23:41 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:25.475 { 00:06:25.475 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:06:25.475 "fields": { 00:06:25.475 "major": 24, 00:06:25.475 "minor": 1, 00:06:25.475 "patch": 1, 00:06:25.475 "suffix": "-pre", 00:06:25.475 "commit": "4b94202c6" 00:06:25.475 } 00:06:25.475 } 00:06:25.733 07:23:41 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:25.733 07:23:41 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:25.733 07:23:41 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:25.733 07:23:41 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:25.733 07:23:41 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:25.733 07:23:41 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:25.733 07:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:25.733 07:23:41 -- app/cmdline.sh@26 -- # sort 00:06:25.733 07:23:41 -- common/autotest_common.sh@10 -- # set +x 00:06:25.733 07:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:25.733 07:23:41 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:25.733 07:23:41 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:25.733 07:23:41 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.733 07:23:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:25.733 07:23:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.733 07:23:41 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:25.733 07:23:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.733 07:23:41 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:25.733 07:23:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.733 07:23:41 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:25.733 07:23:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.733 07:23:41 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:25.733 07:23:41 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:25.733 07:23:41 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.991 request: 00:06:25.991 { 00:06:25.991 "method": "env_dpdk_get_mem_stats", 00:06:25.991 "req_id": 1 00:06:25.991 } 00:06:25.991 Got JSON-RPC error response 00:06:25.991 response: 00:06:25.991 { 00:06:25.991 "code": -32601, 00:06:25.991 "message": "Method not found" 00:06:25.991 } 00:06:25.991 07:23:41 -- common/autotest_common.sh@643 -- # es=1 00:06:25.991 07:23:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:25.991 07:23:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:25.991 07:23:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:25.991 07:23:41 -- app/cmdline.sh@1 -- # killprocess 3987986 00:06:25.991 07:23:41 -- common/autotest_common.sh@926 -- # '[' -z 3987986 ']' 00:06:25.991 07:23:41 -- common/autotest_common.sh@930 -- # kill -0 3987986 00:06:25.991 07:23:41 -- common/autotest_common.sh@931 -- # uname 00:06:25.991 07:23:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.991 07:23:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3987986 00:06:25.991 07:23:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.991 07:23:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.991 07:23:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3987986' 00:06:25.991 killing process with pid 3987986 00:06:25.991 07:23:42 -- common/autotest_common.sh@945 -- # kill 3987986 00:06:25.991 07:23:42 -- common/autotest_common.sh@950 -- # wait 3987986 00:06:26.558 00:06:26.558 real 0m2.161s 00:06:26.558 user 0m2.715s 00:06:26.558 sys 0m0.531s 00:06:26.558 07:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.558 07:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.558 ************************************ 00:06:26.558 END TEST app_cmdline 00:06:26.558 ************************************ 00:06:26.558 07:23:42 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:26.558 07:23:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.558 07:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.558 07:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.558 ************************************ 00:06:26.558 START TEST version 00:06:26.558 ************************************ 00:06:26.558 07:23:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:26.558 * Looking for test storage... 
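The version suite that starts here compares the C header against the installed Python package. A minimal sketch of the header side, mirroring the grep/cut/tr pipeline in the trace that follows (the suite then rewrites the -pre suffix as rc0 before comparing with python3 -c 'import spdk; print(spdk.__version__)'):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  get_header_version() {
      # cut -f2 splits on tabs, matching the pipeline traced below.
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 24 in this run
  minor=$(get_header_version MINOR)    # 1
  patch=$(get_header_version PATCH)    # 1
  suffix=$(get_header_version SUFFIX)  # -pre
  echo "$major.$minor.$patch$suffix"   # 24.1.1-pre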
00:06:26.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:26.558 07:23:42 -- app/version.sh@17 -- # get_header_version major 00:06:26.558 07:23:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:26.558 07:23:42 -- app/version.sh@14 -- # cut -f2 00:06:26.558 07:23:42 -- app/version.sh@14 -- # tr -d '"' 00:06:26.558 07:23:42 -- app/version.sh@17 -- # major=24 00:06:26.558 07:23:42 -- app/version.sh@18 -- # get_header_version minor 00:06:26.558 07:23:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:26.558 07:23:42 -- app/version.sh@14 -- # cut -f2 00:06:26.558 07:23:42 -- app/version.sh@14 -- # tr -d '"' 00:06:26.558 07:23:42 -- app/version.sh@18 -- # minor=1 00:06:26.558 07:23:42 -- app/version.sh@19 -- # get_header_version patch 00:06:26.558 07:23:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:26.558 07:23:42 -- app/version.sh@14 -- # cut -f2 00:06:26.558 07:23:42 -- app/version.sh@14 -- # tr -d '"' 00:06:26.558 07:23:42 -- app/version.sh@19 -- # patch=1 00:06:26.558 07:23:42 -- app/version.sh@20 -- # get_header_version suffix 00:06:26.558 07:23:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:26.558 07:23:42 -- app/version.sh@14 -- # cut -f2 00:06:26.558 07:23:42 -- app/version.sh@14 -- # tr -d '"' 00:06:26.558 07:23:42 -- app/version.sh@20 -- # suffix=-pre 00:06:26.558 07:23:42 -- app/version.sh@22 -- # version=24.1 00:06:26.558 07:23:42 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:26.558 07:23:42 -- app/version.sh@25 -- # version=24.1.1 00:06:26.558 07:23:42 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:26.558 07:23:42 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:26.558 07:23:42 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:26.558 07:23:42 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:26.558 07:23:42 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:26.558 00:06:26.558 real 0m0.100s 00:06:26.558 user 0m0.055s 00:06:26.558 sys 0m0.067s 00:06:26.558 07:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.558 07:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.558 ************************************ 00:06:26.558 END TEST version 00:06:26.558 ************************************ 00:06:26.558 07:23:42 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:26.558 07:23:42 -- spdk/autotest.sh@204 -- # uname -s 00:06:26.558 07:23:42 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:26.558 07:23:42 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:26.558 07:23:42 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:26.558 07:23:42 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:26.558 07:23:42 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:26.558 07:23:42 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:26.558 07:23:42 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:26.558 07:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.558 07:23:42 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:26.558 07:23:42 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:26.558 07:23:42 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:26.558 07:23:42 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:26.558 07:23:42 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:26.558 07:23:42 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:26.558 07:23:42 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:26.558 07:23:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:26.558 07:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.558 07:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.558 ************************************ 00:06:26.558 START TEST nvmf_tcp 00:06:26.558 ************************************ 00:06:26.558 07:23:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:26.558 * Looking for test storage... 00:06:26.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:26.558 07:23:42 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:26.558 07:23:42 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:26.558 07:23:42 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.558 07:23:42 -- nvmf/common.sh@7 -- # uname -s 00:06:26.558 07:23:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.558 07:23:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.558 07:23:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.558 07:23:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.558 07:23:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.558 07:23:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.558 07:23:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.558 07:23:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.558 07:23:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.558 07:23:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.817 07:23:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.817 07:23:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.817 07:23:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.817 07:23:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.817 07:23:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.817 07:23:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.817 07:23:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.817 07:23:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.817 07:23:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.817 07:23:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.817 07:23:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.817 07:23:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.817 07:23:42 -- paths/export.sh@5 -- # export PATH 00:06:26.817 07:23:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.817 07:23:42 -- nvmf/common.sh@46 -- # : 0 00:06:26.817 07:23:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:26.817 07:23:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:26.817 07:23:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:26.817 07:23:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.817 07:23:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.817 07:23:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:26.817 07:23:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:26.817 07:23:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:26.817 07:23:42 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:26.817 07:23:42 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:26.817 07:23:42 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:26.817 07:23:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:26.817 07:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.817 07:23:42 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:26.817 07:23:42 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:26.818 07:23:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:26.818 07:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.818 07:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.818 ************************************ 00:06:26.818 START TEST nvmf_example 00:06:26.818 ************************************ 00:06:26.818 07:23:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:26.818 * Looking for test storage... 
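Before any target comes up, common.sh (sourced by nvmf.sh above, and again by nvmf_example.sh just below) settles the connection identity once so every later connect string reuses it. A hedged sketch of that bootstrap; the HOSTID extraction here is an assumption about how the UUID is peeled off the generated NQN:

  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  echo "initiator identity: ${NVME_HOST[*]}, target port: $NVMF_PORT"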
00:06:26.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.818 07:23:42 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.818 07:23:42 -- nvmf/common.sh@7 -- # uname -s 00:06:26.818 07:23:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.818 07:23:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.818 07:23:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.818 07:23:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.818 07:23:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.818 07:23:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.818 07:23:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.818 07:23:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.818 07:23:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.818 07:23:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.818 07:23:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.818 07:23:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.818 07:23:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.818 07:23:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.818 07:23:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.818 07:23:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.818 07:23:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.818 07:23:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.818 07:23:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.818 07:23:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.818 07:23:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.818 07:23:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.818 07:23:42 -- paths/export.sh@5 -- # export PATH 00:06:26.818 07:23:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.818 07:23:42 -- nvmf/common.sh@46 -- # : 0 00:06:26.818 07:23:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:26.818 07:23:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:26.818 07:23:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:26.818 07:23:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.818 07:23:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.818 07:23:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:26.818 07:23:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:26.818 07:23:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:26.818 07:23:42 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:26.818 07:23:42 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:26.818 07:23:42 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:26.818 07:23:42 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:26.818 07:23:42 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:26.818 07:23:42 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:26.818 07:23:42 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:26.818 07:23:42 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:26.818 07:23:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:26.818 07:23:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.818 07:23:42 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:26.818 07:23:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:26.818 07:23:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.818 07:23:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:26.818 07:23:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:26.818 07:23:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:26.818 07:23:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.818 07:23:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:26.818 07:23:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.818 07:23:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:26.818 07:23:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:26.818 07:23:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:26.818 07:23:42 -- 
common/autotest_common.sh@10 -- # set +x 00:06:28.720 07:23:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:28.720 07:23:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:28.720 07:23:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:28.720 07:23:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:28.720 07:23:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:28.720 07:23:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:28.720 07:23:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:28.720 07:23:44 -- nvmf/common.sh@294 -- # net_devs=() 00:06:28.720 07:23:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:28.720 07:23:44 -- nvmf/common.sh@295 -- # e810=() 00:06:28.720 07:23:44 -- nvmf/common.sh@295 -- # local -ga e810 00:06:28.720 07:23:44 -- nvmf/common.sh@296 -- # x722=() 00:06:28.720 07:23:44 -- nvmf/common.sh@296 -- # local -ga x722 00:06:28.720 07:23:44 -- nvmf/common.sh@297 -- # mlx=() 00:06:28.720 07:23:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:28.720 07:23:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.720 07:23:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:28.720 07:23:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:28.720 07:23:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:28.720 07:23:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:28.720 07:23:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:28.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:28.720 07:23:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:28.720 07:23:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:28.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:28.720 07:23:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
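The device scan above boils down to: walk the PCI functions, keep the ones whose vendor/device IDs are in the e810 list (0x8086:0x159b on this host), and resolve each kept function to its bound net interface through sysfs. A standalone sketch of that loop:

  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      [[ $vendor == "$intel" && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          # Prints e.g. "0000:0a:00.0 -> cvl_0_0", matching the
          # "Found net devices under ..." lines in the surrounding trace.
          [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"
      done
  done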
00:06:28.720 07:23:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:28.720 07:23:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:28.720 07:23:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.720 07:23:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:28.720 07:23:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.720 07:23:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:28.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:28.720 07:23:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.720 07:23:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:28.720 07:23:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.720 07:23:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:28.720 07:23:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.720 07:23:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:28.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:28.720 07:23:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.720 07:23:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:28.720 07:23:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:28.720 07:23:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:28.720 07:23:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:28.721 07:23:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.721 07:23:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.721 07:23:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.721 07:23:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:28.721 07:23:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.721 07:23:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.721 07:23:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:28.721 07:23:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.721 07:23:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.721 07:23:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:28.721 07:23:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:28.721 07:23:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.721 07:23:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.721 07:23:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.721 07:23:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.721 07:23:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:28.721 07:23:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.721 07:23:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.721 07:23:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.721 07:23:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:28.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:28.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms
00:06:28.721
00:06:28.721 --- 10.0.0.2 ping statistics ---
00:06:28.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:28.721 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:06:28.721 07:23:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:28.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:28.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms
00:06:28.721
00:06:28.721 --- 10.0.0.1 ping statistics ---
00:06:28.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:28.721 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:06:28.721 07:23:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:28.721 07:23:44 -- nvmf/common.sh@410 -- # return 0
00:06:28.721 07:23:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:06:28.721 07:23:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:28.721 07:23:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:06:28.721 07:23:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:06:28.721 07:23:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:28.721 07:23:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:06:28.721 07:23:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:06:28.721 07:23:44 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:06:28.721 07:23:44 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:06:28.721 07:23:44 -- common/autotest_common.sh@712 -- # xtrace_disable
00:06:28.721 07:23:44 -- common/autotest_common.sh@10 -- # set +x
00:06:28.721 07:23:44 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:06:28.721 07:23:44 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:06:28.721 07:23:44 -- target/nvmf_example.sh@34 -- # nvmfpid=3989967
00:06:28.721 07:23:44 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:06:28.721 07:23:44 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:28.721 07:23:44 -- target/nvmf_example.sh@36 -- # waitforlisten 3989967
00:06:28.721 07:23:44 -- common/autotest_common.sh@819 -- # '[' -z 3989967 ']'
00:06:28.721 07:23:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:28.721 07:23:44 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:28.721 07:23:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:28.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
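Both pings succeed in this run, confirming the split that nvmf_tcp_init built a moment earlier in the trace: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1. Condensed from the commands traced above (interface names and addresses are the ones from this run), the wiring is:

    # Namespace wiring as traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                           # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target ns -> initiator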
00:06:28.721 07:23:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.721 07:23:44 -- common/autotest_common.sh@10 -- # set +x 00:06:28.979 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.913 07:23:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.913 07:23:45 -- common/autotest_common.sh@852 -- # return 0 00:06:29.913 07:23:45 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:29.913 07:23:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:29.913 07:23:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.913 07:23:45 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:29.913 07:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.913 07:23:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.913 07:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.913 07:23:45 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:29.913 07:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.913 07:23:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.913 07:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.913 07:23:45 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:29.913 07:23:45 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:29.913 07:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.913 07:23:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.913 07:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.913 07:23:45 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:29.913 07:23:45 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:29.913 07:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.913 07:23:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.913 07:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.913 07:23:45 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.913 07:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.913 07:23:45 -- common/autotest_common.sh@10 -- # set +x 00:06:29.913 07:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.913 07:23:45 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:29.913 07:23:45 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:29.913 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.113 Initializing NVMe Controllers 00:06:42.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:42.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:42.113 Initialization complete. Launching workers. 
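rpc_cmd in the trace is the suite's shorthand for SPDK's JSON-RPC client talking to /var/tmp/spdk.sock; strung together, the provisioning above reduces to the sequence below, shown with scripts/rpc.py in place of the wrapper (an equivalent sketch, identifiers as used in this run):

    # Target provisioning as traced above: TCP transport, one 64 MiB malloc
    # bdev with 512 B blocks, one subsystem exposing it on 10.0.0.2:4420.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512        # -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ...then the initiator-side workload that produced the numbers below:
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'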
00:06:42.113 ========================================================
00:06:42.113 Latency(us)
00:06:42.113 Device Information : IOPS MiB/s Average min max
00:06:42.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15475.30 60.45 4137.41 784.03 16437.54
00:06:42.113 ========================================================
00:06:42.113 Total : 15475.30 60.45 4137.41 784.03 16437.54
00:06:42.113
00:06:42.113 07:23:56 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:42.113 07:23:56 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:42.113 07:23:56 -- nvmf/common.sh@476 -- # nvmfcleanup
00:06:42.113 07:23:56 -- nvmf/common.sh@116 -- # sync
00:06:42.113 07:23:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:06:42.113 07:23:56 -- nvmf/common.sh@119 -- # set +e
00:06:42.113 07:23:56 -- nvmf/common.sh@120 -- # for i in {1..20}
00:06:42.113 07:23:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:06:42.113 rmmod nvme_tcp
00:06:42.113 rmmod nvme_fabrics
00:06:42.113 rmmod nvme_keyring
00:06:42.113 07:23:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:06:42.113 07:23:56 -- nvmf/common.sh@123 -- # set -e
00:06:42.113 07:23:56 -- nvmf/common.sh@124 -- # return 0
00:06:42.113 07:23:56 -- nvmf/common.sh@477 -- # '[' -n 3989967 ']'
00:06:42.113 07:23:56 -- nvmf/common.sh@478 -- # killprocess 3989967
00:06:42.113 07:23:56 -- common/autotest_common.sh@926 -- # '[' -z 3989967 ']'
00:06:42.113 07:23:56 -- common/autotest_common.sh@930 -- # kill -0 3989967
00:06:42.113 07:23:56 -- common/autotest_common.sh@931 -- # uname
00:06:42.113 07:23:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:42.113 07:23:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3989967
00:06:42.113 07:23:56 -- common/autotest_common.sh@932 -- # process_name=nvmf
00:06:42.113 07:23:56 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']'
00:06:42.113 07:23:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3989967'
00:06:42.113 killing process with pid 3989967
00:06:42.113 07:23:56 -- common/autotest_common.sh@945 -- # kill 3989967
00:06:42.113 07:23:56 -- common/autotest_common.sh@950 -- # wait 3989967
00:06:42.113 nvmf threads initialize successfully
00:06:42.113 bdev subsystem init successfully
00:06:42.113 created a nvmf target service
00:06:42.113 create targets's poll groups done
00:06:42.113 all subsystems of target started
00:06:42.113 nvmf target is running
00:06:42.113 all subsystems of target stopped
00:06:42.113 destroy targets's poll groups done
00:06:42.113 destroyed the nvmf target service
00:06:42.113 bdev subsystem finish successfully
00:06:42.113 nvmf threads destroy successfully
00:06:42.113 07:23:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:06:42.113 07:23:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:06:42.113 07:23:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:06:42.113 07:23:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:42.113 07:23:56 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:06:42.113 07:23:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:42.113 07:23:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:42.113 07:23:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:42.681 07:23:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:06:42.681 07:23:58 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:06:42.681 07:23:58 -- common/autotest_common.sh@718 -- # xtrace_disable
00:06:42.681 07:23:58 -- common/autotest_common.sh@10 -- # set +x
00:06:42.681
00:06:42.681 real 0m15.917s
00:06:42.681 user 0m45.741s
00:06:42.681 sys 0m3.127s
00:06:42.681 07:23:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:42.681 07:23:58 -- common/autotest_common.sh@10 -- # set +x
00:06:42.681 ************************************
00:06:42.681 END TEST nvmf_example
00:06:42.681 ************************************
00:06:42.681 07:23:58 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:06:42.681 07:23:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:06:42.681 07:23:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:42.681 07:23:58 -- common/autotest_common.sh@10 -- # set +x
00:06:42.681 ************************************
00:06:42.681 START TEST nvmf_filesystem
00:06:42.681 ************************************
00:06:42.681 07:23:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:06:42.681 * Looking for test storage...
00:06:42.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:42.681 07:23:58 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:06:42.681 07:23:58 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:06:42.681 07:23:58 -- common/autotest_common.sh@34 -- # set -e
00:06:42.681 07:23:58 -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:06:42.681 07:23:58 -- common/autotest_common.sh@36 -- # shopt -s extglob
00:06:42.681 07:23:58 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:06:42.681 07:23:58 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:06:42.681 07:23:58 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:06:42.681 07:23:58 -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:06:42.681 07:23:58 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:06:42.681 07:23:58 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:06:42.681 07:23:58 -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:06:42.681 07:23:58 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:06:42.681 07:23:58 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:06:42.681 07:23:58 -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:06:42.681 07:23:58 -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:06:42.681 07:23:58 -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:06:42.681 07:23:58 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:06:42.681 07:23:58 -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:06:42.681 07:23:58 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:06:42.681 07:23:58 -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:06:42.681 07:23:58 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:06:42.681 07:23:58 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:06:42.681 07:23:58 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:06:42.681 07:23:58 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:06:42.681 07:23:58 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:06:42.681 07:23:58 --
common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:42.681 07:23:58 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:42.681 07:23:58 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:42.681 07:23:58 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:42.681 07:23:58 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:42.681 07:23:58 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:42.681 07:23:58 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:42.681 07:23:58 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:42.681 07:23:58 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:42.681 07:23:58 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:42.681 07:23:58 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:42.681 07:23:58 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:42.681 07:23:58 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:42.681 07:23:58 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:42.681 07:23:58 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:42.681 07:23:58 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:42.681 07:23:58 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:42.682 07:23:58 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:42.682 07:23:58 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:42.682 07:23:58 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:42.682 07:23:58 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:42.682 07:23:58 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:42.682 07:23:58 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:42.682 07:23:58 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:42.682 07:23:58 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:42.682 07:23:58 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:42.682 07:23:58 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:42.682 07:23:58 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:42.682 07:23:58 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:42.682 07:23:58 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:42.682 07:23:58 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:42.682 07:23:58 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:42.682 07:23:58 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:42.682 07:23:58 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:42.682 07:23:58 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:42.682 07:23:58 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:42.682 07:23:58 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:42.682 07:23:58 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:42.682 07:23:58 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:42.682 07:23:58 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:42.682 07:23:58 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:42.682 07:23:58 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:42.682 07:23:58 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:42.682 07:23:58 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:42.682 07:23:58 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:42.682 07:23:58 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:42.682 07:23:58 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
00:06:42.682 07:23:58 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:42.682 07:23:58 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:42.682 07:23:58 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:42.682 07:23:58 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:42.682 07:23:58 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:42.682 07:23:58 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:42.682 07:23:58 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:42.682 07:23:58 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:42.682 07:23:58 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:42.682 07:23:58 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:42.682 07:23:58 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:42.682 07:23:58 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:42.682 07:23:58 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:42.682 07:23:58 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:42.682 07:23:58 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:42.682 07:23:58 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:42.682 07:23:58 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:42.682 07:23:58 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.682 07:23:58 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:42.682 07:23:58 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.682 07:23:58 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:42.682 07:23:58 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:42.682 07:23:58 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:42.682 07:23:58 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:42.682 07:23:58 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:42.682 07:23:58 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:42.682 07:23:58 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:42.682 07:23:58 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:42.682 07:23:58 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:42.682 #define SPDK_CONFIG_H 00:06:42.682 #define SPDK_CONFIG_APPS 1 00:06:42.682 #define SPDK_CONFIG_ARCH native 00:06:42.682 #undef SPDK_CONFIG_ASAN 00:06:42.682 #undef SPDK_CONFIG_AVAHI 00:06:42.682 #undef SPDK_CONFIG_CET 00:06:42.682 #define SPDK_CONFIG_COVERAGE 1 00:06:42.682 #define SPDK_CONFIG_CROSS_PREFIX 00:06:42.682 #undef SPDK_CONFIG_CRYPTO 00:06:42.682 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:42.682 #undef SPDK_CONFIG_CUSTOMOCF 00:06:42.682 #undef SPDK_CONFIG_DAOS 00:06:42.682 #define SPDK_CONFIG_DAOS_DIR 00:06:42.682 #define SPDK_CONFIG_DEBUG 1 00:06:42.682 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:42.682 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:42.682 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:42.682 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:06:42.682 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:42.682 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:42.682 #define SPDK_CONFIG_EXAMPLES 1 00:06:42.682 #undef SPDK_CONFIG_FC 00:06:42.682 #define SPDK_CONFIG_FC_PATH 00:06:42.682 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:42.682 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:42.682 #undef SPDK_CONFIG_FUSE 00:06:42.682 #undef SPDK_CONFIG_FUZZER 00:06:42.682 #define SPDK_CONFIG_FUZZER_LIB 00:06:42.682 #undef SPDK_CONFIG_GOLANG 00:06:42.682 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:42.682 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:42.682 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:42.682 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:42.682 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:42.682 #define SPDK_CONFIG_IDXD 1 00:06:42.682 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:42.682 #undef SPDK_CONFIG_IPSEC_MB 00:06:42.682 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:42.682 #define SPDK_CONFIG_ISAL 1 00:06:42.682 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:42.682 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:42.682 #define SPDK_CONFIG_LIBDIR 00:06:42.682 #undef SPDK_CONFIG_LTO 00:06:42.682 #define SPDK_CONFIG_MAX_LCORES 00:06:42.682 #define SPDK_CONFIG_NVME_CUSE 1 00:06:42.682 #undef SPDK_CONFIG_OCF 00:06:42.682 #define SPDK_CONFIG_OCF_PATH 00:06:42.682 #define SPDK_CONFIG_OPENSSL_PATH 00:06:42.682 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:42.682 #undef SPDK_CONFIG_PGO_USE 00:06:42.682 #define SPDK_CONFIG_PREFIX /usr/local 00:06:42.682 #undef SPDK_CONFIG_RAID5F 00:06:42.682 #undef SPDK_CONFIG_RBD 00:06:42.682 #define SPDK_CONFIG_RDMA 1 00:06:42.682 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:42.682 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:42.682 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:42.682 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:42.682 #define SPDK_CONFIG_SHARED 1 00:06:42.682 #undef SPDK_CONFIG_SMA 00:06:42.682 #define SPDK_CONFIG_TESTS 1 00:06:42.682 #undef SPDK_CONFIG_TSAN 00:06:42.682 #define SPDK_CONFIG_UBLK 1 00:06:42.682 #define SPDK_CONFIG_UBSAN 1 00:06:42.682 #undef SPDK_CONFIG_UNIT_TESTS 00:06:42.682 #undef SPDK_CONFIG_URING 00:06:42.682 #define SPDK_CONFIG_URING_PATH 00:06:42.682 #undef SPDK_CONFIG_URING_ZNS 00:06:42.682 #undef SPDK_CONFIG_USDT 00:06:42.682 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:42.682 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:42.682 #undef SPDK_CONFIG_VFIO_USER 00:06:42.682 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:42.682 #define SPDK_CONFIG_VHOST 1 00:06:42.682 #define SPDK_CONFIG_VIRTIO 1 00:06:42.682 #undef SPDK_CONFIG_VTUNE 00:06:42.682 #define SPDK_CONFIG_VTUNE_DIR 00:06:42.682 #define SPDK_CONFIG_WERROR 1 00:06:42.682 #define SPDK_CONFIG_WPDK_DIR 00:06:42.682 #undef SPDK_CONFIG_XNVME 00:06:42.682 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:42.682 07:23:58 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:42.682 07:23:58 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.682 07:23:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.682 07:23:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.682 07:23:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.682 07:23:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.682 07:23:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.682 07:23:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.682 07:23:58 -- paths/export.sh@5 -- # export PATH 00:06:42.682 07:23:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.682 07:23:58 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:42.682 07:23:58 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:42.682 07:23:58 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:42.682 07:23:58 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:42.682 07:23:58 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:42.682 07:23:58 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.682 07:23:58 -- pm/common@16 -- # TEST_TAG=N/A 00:06:42.682 07:23:58 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:42.682 07:23:58 -- common/autotest_common.sh@52 -- # : 1 00:06:42.682 07:23:58 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:42.682 07:23:58 -- common/autotest_common.sh@56 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:42.683 07:23:58 -- 
common/autotest_common.sh@58 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:42.683 07:23:58 -- common/autotest_common.sh@60 -- # : 1 00:06:42.683 07:23:58 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:42.683 07:23:58 -- common/autotest_common.sh@62 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:42.683 07:23:58 -- common/autotest_common.sh@64 -- # : 00:06:42.683 07:23:58 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:42.683 07:23:58 -- common/autotest_common.sh@66 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:42.683 07:23:58 -- common/autotest_common.sh@68 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:42.683 07:23:58 -- common/autotest_common.sh@70 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:42.683 07:23:58 -- common/autotest_common.sh@72 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:42.683 07:23:58 -- common/autotest_common.sh@74 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:42.683 07:23:58 -- common/autotest_common.sh@76 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:42.683 07:23:58 -- common/autotest_common.sh@78 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:42.683 07:23:58 -- common/autotest_common.sh@80 -- # : 1 00:06:42.683 07:23:58 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:42.683 07:23:58 -- common/autotest_common.sh@82 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:42.683 07:23:58 -- common/autotest_common.sh@84 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:42.683 07:23:58 -- common/autotest_common.sh@86 -- # : 1 00:06:42.683 07:23:58 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:42.683 07:23:58 -- common/autotest_common.sh@88 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:42.683 07:23:58 -- common/autotest_common.sh@90 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:42.683 07:23:58 -- common/autotest_common.sh@92 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:42.683 07:23:58 -- common/autotest_common.sh@94 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:42.683 07:23:58 -- common/autotest_common.sh@96 -- # : tcp 00:06:42.683 07:23:58 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:42.683 07:23:58 -- common/autotest_common.sh@98 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:42.683 07:23:58 -- common/autotest_common.sh@100 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:42.683 07:23:58 -- common/autotest_common.sh@102 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:42.683 07:23:58 -- common/autotest_common.sh@104 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:42.683 
07:23:58 -- common/autotest_common.sh@106 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:42.683 07:23:58 -- common/autotest_common.sh@108 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:42.683 07:23:58 -- common/autotest_common.sh@110 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:42.683 07:23:58 -- common/autotest_common.sh@112 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:42.683 07:23:58 -- common/autotest_common.sh@114 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:42.683 07:23:58 -- common/autotest_common.sh@116 -- # : 1 00:06:42.683 07:23:58 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:42.683 07:23:58 -- common/autotest_common.sh@118 -- # : 00:06:42.683 07:23:58 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:42.683 07:23:58 -- common/autotest_common.sh@120 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:42.683 07:23:58 -- common/autotest_common.sh@122 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:42.683 07:23:58 -- common/autotest_common.sh@124 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:42.683 07:23:58 -- common/autotest_common.sh@126 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:42.683 07:23:58 -- common/autotest_common.sh@128 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:42.683 07:23:58 -- common/autotest_common.sh@130 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:42.683 07:23:58 -- common/autotest_common.sh@132 -- # : 00:06:42.683 07:23:58 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:42.683 07:23:58 -- common/autotest_common.sh@134 -- # : true 00:06:42.683 07:23:58 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:42.683 07:23:58 -- common/autotest_common.sh@136 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:42.683 07:23:58 -- common/autotest_common.sh@138 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:42.683 07:23:58 -- common/autotest_common.sh@140 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:42.683 07:23:58 -- common/autotest_common.sh@142 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:42.683 07:23:58 -- common/autotest_common.sh@144 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:42.683 07:23:58 -- common/autotest_common.sh@146 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:42.683 07:23:58 -- common/autotest_common.sh@148 -- # : e810 00:06:42.683 07:23:58 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:42.683 07:23:58 -- common/autotest_common.sh@150 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:42.683 07:23:58 -- common/autotest_common.sh@152 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:06:42.683 07:23:58 -- common/autotest_common.sh@154 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:42.683 07:23:58 -- common/autotest_common.sh@156 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:42.683 07:23:58 -- common/autotest_common.sh@158 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:42.683 07:23:58 -- common/autotest_common.sh@160 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:42.683 07:23:58 -- common/autotest_common.sh@163 -- # : 00:06:42.683 07:23:58 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:42.683 07:23:58 -- common/autotest_common.sh@165 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:42.683 07:23:58 -- common/autotest_common.sh@167 -- # : 0 00:06:42.683 07:23:58 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:42.683 07:23:58 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:42.683 07:23:58 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:42.683 07:23:58 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:42.683 07:23:58 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:42.683 07:23:58 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.683 07:23:58 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.683 07:23:58 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.683 07:23:58 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.683 07:23:58 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:42.683 07:23:58 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:42.684 07:23:58 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.684 07:23:58 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.684 07:23:58 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:42.684 07:23:58 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:42.684 07:23:58 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:42.684 07:23:58 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:42.684 07:23:58 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:42.684 07:23:58 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:42.684 07:23:58 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:42.684 07:23:58 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:42.684 07:23:58 -- common/autotest_common.sh@196 -- # cat 00:06:42.684 07:23:58 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:42.684 07:23:58 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:42.684 07:23:58 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:42.684 07:23:58 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:42.684 07:23:58 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:42.684 07:23:58 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:42.684 07:23:58 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:42.684 07:23:58 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:42.684 07:23:58 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:42.684 07:23:58 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:42.684 07:23:58 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:42.684 07:23:58 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:42.684 07:23:58 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:42.684 07:23:58 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:42.684 07:23:58 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:42.684 07:23:58 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:42.684 07:23:58 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:42.684 07:23:58 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:42.684 07:23:58 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:42.684 07:23:58 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:42.684 07:23:58 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:42.684 07:23:58 -- common/autotest_common.sh@249 -- # valgrind= 00:06:42.684 07:23:58 -- common/autotest_common.sh@255 -- # uname -s 00:06:42.684 07:23:58 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:42.684 07:23:58 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:42.684 07:23:58 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:06:42.684 07:23:58 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:42.684 07:23:58 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:42.684 07:23:58 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:06:42.684 07:23:58 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:42.684 07:23:58 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:42.684 07:23:58 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:42.684 07:23:58 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:42.684 07:23:58 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:42.684 07:23:58 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:42.684 07:23:58 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:42.684 07:23:58 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:42.684 07:23:58 -- common/autotest_common.sh@309 -- # [[ -z 3991779 ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@309 -- # 
kill -0 3991779 00:06:42.684 07:23:58 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:42.684 07:23:58 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:42.684 07:23:58 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:42.684 07:23:58 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:42.684 07:23:58 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:42.684 07:23:58 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:42.684 07:23:58 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:42.684 07:23:58 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.aioMnl 00:06:42.684 07:23:58 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:42.684 07:23:58 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.aioMnl/tests/target /tmp/spdk.aioMnl 00:06:42.684 07:23:58 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:42.684 07:23:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:42.684 07:23:58 -- common/autotest_common.sh@318 -- # df -T 00:06:42.684 07:23:58 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:06:42.684 07:23:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:42.684 07:23:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:06:42.684 07:23:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:06:42.684 07:23:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=55587287040 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994708992 00:06:42.684 07:23:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=6407421952 00:06:42.684 07:23:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943834112 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=30997352448 00:06:42.684 07:23:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:06:42.684 07:23:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390182912 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:06:42.684 07:23:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:06:42.684 07:23:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996111360 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997356544 00:06:42.684 07:23:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=1245184 00:06:42.684 07:23:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199463936 00:06:42.684 07:23:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199468032 00:06:42.684 07:23:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:42.684 07:23:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:42.684 07:23:58 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:42.684 * Looking for test storage... 
00:06:42.684 07:23:58 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:42.684 07:23:58 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:42.684 07:23:58 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.684 07:23:58 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:42.684 07:23:58 -- common/autotest_common.sh@363 -- # mount=/ 00:06:42.684 07:23:58 -- common/autotest_common.sh@365 -- # target_space=55587287040 00:06:42.684 07:23:58 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:42.684 07:23:58 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:42.684 07:23:58 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:06:42.684 07:23:58 -- common/autotest_common.sh@372 -- # new_size=8622014464 00:06:42.684 07:23:58 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:42.684 07:23:58 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.684 07:23:58 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.684 07:23:58 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.684 07:23:58 -- common/autotest_common.sh@380 -- # return 0 00:06:42.684 07:23:58 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:42.684 07:23:58 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:42.684 07:23:58 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:42.684 07:23:58 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:42.685 07:23:58 -- common/autotest_common.sh@1672 -- # true 00:06:42.685 07:23:58 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:42.685 07:23:58 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:42.685 07:23:58 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:42.685 07:23:58 -- common/autotest_common.sh@27 -- # exec 00:06:42.685 07:23:58 -- common/autotest_common.sh@29 -- # exec 00:06:42.685 07:23:58 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:42.685 07:23:58 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:42.685 07:23:58 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:42.685 07:23:58 -- common/autotest_common.sh@18 -- # set -x 00:06:42.685 07:23:58 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.685 07:23:58 -- nvmf/common.sh@7 -- # uname -s 00:06:42.685 07:23:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.685 07:23:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.685 07:23:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.685 07:23:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.685 07:23:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.685 07:23:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.685 07:23:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.685 07:23:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.685 07:23:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.685 07:23:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.685 07:23:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:42.685 07:23:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:42.685 07:23:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.685 07:23:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.685 07:23:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.685 07:23:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.685 07:23:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.685 07:23:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.685 07:23:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.685 07:23:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.685 07:23:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.685 07:23:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.685 07:23:58 -- paths/export.sh@5 -- # export PATH 00:06:42.685 07:23:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.685 07:23:58 -- nvmf/common.sh@46 -- # : 0 00:06:42.685 07:23:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:42.685 07:23:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:42.685 07:23:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:42.685 07:23:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.685 07:23:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.685 07:23:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:42.685 07:23:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:42.685 07:23:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:42.685 07:23:58 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:42.685 07:23:58 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:42.685 07:23:58 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:42.685 07:23:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:42.685 07:23:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.685 07:23:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:42.685 07:23:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:42.685 07:23:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:42.685 07:23:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.685 07:23:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.685 07:23:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.943 07:23:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:42.943 07:23:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:42.943 07:23:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:42.943 07:23:58 -- common/autotest_common.sh@10 -- # set +x 00:06:44.841 07:24:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:44.841 07:24:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:44.841 07:24:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:44.841 07:24:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:44.841 07:24:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:44.841 07:24:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:44.841 07:24:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:44.841 07:24:00 -- 
nvmf/common.sh@294 -- # net_devs=() 00:06:44.841 07:24:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:44.841 07:24:00 -- nvmf/common.sh@295 -- # e810=() 00:06:44.841 07:24:00 -- nvmf/common.sh@295 -- # local -ga e810 00:06:44.841 07:24:00 -- nvmf/common.sh@296 -- # x722=() 00:06:44.841 07:24:00 -- nvmf/common.sh@296 -- # local -ga x722 00:06:44.841 07:24:00 -- nvmf/common.sh@297 -- # mlx=() 00:06:44.841 07:24:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:44.841 07:24:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.841 07:24:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:44.841 07:24:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:44.841 07:24:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:44.841 07:24:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:44.841 07:24:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:44.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:44.841 07:24:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:44.841 07:24:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:44.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:44.841 07:24:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:44.841 07:24:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:44.841 07:24:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:44.841 07:24:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.841 07:24:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:44.841 07:24:00 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.841 07:24:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:44.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:44.841 07:24:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.841 07:24:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:44.841 07:24:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.841 07:24:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:44.841 07:24:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.841 07:24:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:44.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:44.842 07:24:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.842 07:24:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:44.842 07:24:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:44.842 07:24:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:44.842 07:24:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:44.842 07:24:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:44.842 07:24:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.842 07:24:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.842 07:24:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.842 07:24:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:44.842 07:24:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.842 07:24:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.842 07:24:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:44.842 07:24:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.842 07:24:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.842 07:24:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:44.842 07:24:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:44.842 07:24:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.842 07:24:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.842 07:24:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.842 07:24:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.842 07:24:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:44.842 07:24:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.842 07:24:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.842 07:24:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.842 07:24:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:44.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:06:44.842 00:06:44.842 --- 10.0.0.2 ping statistics --- 00:06:44.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.842 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:06:44.842 07:24:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:44.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:06:44.842 00:06:44.842 --- 10.0.0.1 ping statistics --- 00:06:44.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.842 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:06:44.842 07:24:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.842 07:24:00 -- nvmf/common.sh@410 -- # return 0 00:06:44.842 07:24:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:44.842 07:24:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.842 07:24:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:44.842 07:24:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:44.842 07:24:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.842 07:24:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:44.842 07:24:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:44.842 07:24:00 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:44.842 07:24:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:44.842 07:24:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.842 07:24:00 -- common/autotest_common.sh@10 -- # set +x 00:06:44.842 ************************************ 00:06:44.842 START TEST nvmf_filesystem_no_in_capsule 00:06:44.842 ************************************ 00:06:44.842 07:24:01 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:44.842 07:24:01 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:44.842 07:24:01 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:44.842 07:24:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:44.842 07:24:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:44.842 07:24:01 -- common/autotest_common.sh@10 -- # set +x 00:06:44.842 07:24:01 -- nvmf/common.sh@469 -- # nvmfpid=3993468 00:06:44.842 07:24:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:44.842 07:24:01 -- nvmf/common.sh@470 -- # waitforlisten 3993468 00:06:44.842 07:24:01 -- common/autotest_common.sh@819 -- # '[' -z 3993468 ']' 00:06:44.842 07:24:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.842 07:24:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.842 07:24:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.842 07:24:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.842 07:24:01 -- common/autotest_common.sh@10 -- # set +x 00:06:45.100 [2024-07-14 07:24:01.050386] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
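[Sketch, not captured output: the nvmf_tcp_init bring-up traced above condenses to the commands below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing come from this run; the script's flush/retry bookkeeping is omitted.]
  # Move one E810 port into a namespace for the target and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns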
00:06:45.100 [2024-07-14 07:24:01.050470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.100 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.100 [2024-07-14 07:24:01.114340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.100 [2024-07-14 07:24:01.222028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.100 [2024-07-14 07:24:01.222183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.100 [2024-07-14 07:24:01.222200] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.100 [2024-07-14 07:24:01.222212] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.100 [2024-07-14 07:24:01.222268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.100 [2024-07-14 07:24:01.222330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.100 [2024-07-14 07:24:01.222396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.100 [2024-07-14 07:24:01.222399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.031 07:24:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.031 07:24:02 -- common/autotest_common.sh@852 -- # return 0 00:06:46.031 07:24:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:46.031 07:24:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:46.031 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.031 07:24:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.031 07:24:02 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:46.031 07:24:02 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:46.031 07:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.031 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.031 [2024-07-14 07:24:02.030425] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.031 07:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.031 07:24:02 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:46.031 07:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.031 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.288 Malloc1 00:06:46.288 07:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.288 07:24:02 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:46.288 07:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.288 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.288 07:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.288 07:24:02 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:46.288 07:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.288 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.288 07:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.288 07:24:02 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
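[Sketch, not captured output: the rpc_cmd calls issued above for the in_capsule=0 pass correspond to the plain rpc.py invocations below. This assumes rpc_cmd in the harness talks to the default /var/tmp/spdk.sock socket shown earlier, with SPDK_DIR standing for the workspace checkout path from this log.]
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0    # c=0: no in-capsule data
  $RPC bdev_malloc_create 512 512 -b Malloc1           # 512 MiB bdev, 512 B blocks (1048576 blocks)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420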
00:06:46.288 07:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.288 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.288 [2024-07-14 07:24:02.223968] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.288 07:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.288 07:24:02 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:46.288 07:24:02 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:46.288 07:24:02 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:46.288 07:24:02 -- common/autotest_common.sh@1359 -- # local bs 00:06:46.288 07:24:02 -- common/autotest_common.sh@1360 -- # local nb 00:06:46.288 07:24:02 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:46.288 07:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.288 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.288 07:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.288 07:24:02 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:46.288 { 00:06:46.288 "name": "Malloc1", 00:06:46.288 "aliases": [ 00:06:46.288 "8fed145f-6577-45b3-81f3-7d22de784e28" 00:06:46.288 ], 00:06:46.288 "product_name": "Malloc disk", 00:06:46.288 "block_size": 512, 00:06:46.288 "num_blocks": 1048576, 00:06:46.288 "uuid": "8fed145f-6577-45b3-81f3-7d22de784e28", 00:06:46.288 "assigned_rate_limits": { 00:06:46.288 "rw_ios_per_sec": 0, 00:06:46.288 "rw_mbytes_per_sec": 0, 00:06:46.288 "r_mbytes_per_sec": 0, 00:06:46.288 "w_mbytes_per_sec": 0 00:06:46.288 }, 00:06:46.288 "claimed": true, 00:06:46.288 "claim_type": "exclusive_write", 00:06:46.288 "zoned": false, 00:06:46.288 "supported_io_types": { 00:06:46.288 "read": true, 00:06:46.288 "write": true, 00:06:46.288 "unmap": true, 00:06:46.288 "write_zeroes": true, 00:06:46.288 "flush": true, 00:06:46.288 "reset": true, 00:06:46.288 "compare": false, 00:06:46.288 "compare_and_write": false, 00:06:46.288 "abort": true, 00:06:46.288 "nvme_admin": false, 00:06:46.288 "nvme_io": false 00:06:46.288 }, 00:06:46.288 "memory_domains": [ 00:06:46.288 { 00:06:46.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.288 "dma_device_type": 2 00:06:46.288 } 00:06:46.288 ], 00:06:46.288 "driver_specific": {} 00:06:46.288 } 00:06:46.288 ]' 00:06:46.288 07:24:02 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:46.288 07:24:02 -- common/autotest_common.sh@1362 -- # bs=512 00:06:46.288 07:24:02 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:46.288 07:24:02 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:46.288 07:24:02 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:46.288 07:24:02 -- common/autotest_common.sh@1367 -- # echo 512 00:06:46.288 07:24:02 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:46.288 07:24:02 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:46.855 07:24:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:46.855 07:24:02 -- common/autotest_common.sh@1177 -- # local i=0 00:06:46.855 07:24:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:46.855 07:24:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:46.855 07:24:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:48.803 07:24:04 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:48.803 07:24:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:48.803 07:24:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:48.803 07:24:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:48.803 07:24:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:48.803 07:24:04 -- common/autotest_common.sh@1187 -- # return 0 00:06:48.803 07:24:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:48.803 07:24:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:48.803 07:24:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:48.803 07:24:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:48.803 07:24:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:48.803 07:24:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:48.803 07:24:04 -- setup/common.sh@80 -- # echo 536870912 00:06:48.803 07:24:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:48.803 07:24:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:48.803 07:24:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:48.803 07:24:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:49.369 07:24:05 -- target/filesystem.sh@69 -- # partprobe 00:06:49.369 07:24:05 -- target/filesystem.sh@70 -- # sleep 1 00:06:50.742 07:24:06 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:50.742 07:24:06 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:50.742 07:24:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:50.742 07:24:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.742 07:24:06 -- common/autotest_common.sh@10 -- # set +x 00:06:50.742 ************************************ 00:06:50.742 START TEST filesystem_ext4 00:06:50.742 ************************************ 00:06:50.742 07:24:06 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:50.742 07:24:06 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:50.742 07:24:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.742 07:24:06 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:50.742 07:24:06 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:50.742 07:24:06 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:50.742 07:24:06 -- common/autotest_common.sh@904 -- # local i=0 00:06:50.742 07:24:06 -- common/autotest_common.sh@905 -- # local force 00:06:50.742 07:24:06 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:50.742 07:24:06 -- common/autotest_common.sh@908 -- # force=-F 00:06:50.742 07:24:06 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:50.742 mke2fs 1.46.5 (30-Dec-2021) 00:06:50.742 Discarding device blocks: 0/522240 done 00:06:50.742 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:50.742 Filesystem UUID: 6eae9682-5c94-4153-90f9-4daddd700afb 00:06:50.742 Superblock backups stored on blocks: 00:06:50.742 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:50.742 00:06:50.742 Allocating group tables: 0/64 done 00:06:50.742 Writing inode tables: 0/64 done 00:06:51.000 Creating journal (8192 blocks): done 00:06:52.094 Writing superblocks and filesystem accounting information: 0/64 done 00:06:52.094 00:06:52.094 07:24:08 -- 
common/autotest_common.sh@921 -- # return 0 00:06:52.094 07:24:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.094 07:24:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.094 07:24:08 -- target/filesystem.sh@25 -- # sync 00:06:52.094 07:24:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.094 07:24:08 -- target/filesystem.sh@27 -- # sync 00:06:52.094 07:24:08 -- target/filesystem.sh@29 -- # i=0 00:06:52.094 07:24:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.352 07:24:08 -- target/filesystem.sh@37 -- # kill -0 3993468 00:06:52.352 07:24:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.352 07:24:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.352 07:24:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.352 07:24:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.352 00:06:52.352 real 0m1.740s 00:06:52.352 user 0m0.019s 00:06:52.352 sys 0m0.056s 00:06:52.352 07:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.352 07:24:08 -- common/autotest_common.sh@10 -- # set +x 00:06:52.352 ************************************ 00:06:52.352 END TEST filesystem_ext4 00:06:52.352 ************************************ 00:06:52.352 07:24:08 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:52.352 07:24:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:52.352 07:24:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.352 07:24:08 -- common/autotest_common.sh@10 -- # set +x 00:06:52.352 ************************************ 00:06:52.352 START TEST filesystem_btrfs 00:06:52.352 ************************************ 00:06:52.352 07:24:08 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:52.352 07:24:08 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:52.352 07:24:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:52.352 07:24:08 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:52.352 07:24:08 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:52.352 07:24:08 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:52.352 07:24:08 -- common/autotest_common.sh@904 -- # local i=0 00:06:52.352 07:24:08 -- common/autotest_common.sh@905 -- # local force 00:06:52.352 07:24:08 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:52.352 07:24:08 -- common/autotest_common.sh@910 -- # force=-f 00:06:52.352 07:24:08 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:52.610 btrfs-progs v6.6.2 00:06:52.610 See https://btrfs.readthedocs.io for more information. 00:06:52.610 00:06:52.610 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:52.610 NOTE: several default settings have changed in version 5.15, please make sure 00:06:52.610 this does not affect your deployments: 00:06:52.610 - DUP for metadata (-m dup) 00:06:52.610 - enabled no-holes (-O no-holes) 00:06:52.610 - enabled free-space-tree (-R free-space-tree) 00:06:52.610 00:06:52.610 Label: (null) 00:06:52.610 UUID: 505555bf-b7e9-4fe1-ac75-cc659cc7234f 00:06:52.610 Node size: 16384 00:06:52.610 Sector size: 4096 00:06:52.610 Filesystem size: 510.00MiB 00:06:52.610 Block group profiles: 00:06:52.610 Data: single 8.00MiB 00:06:52.610 Metadata: DUP 32.00MiB 00:06:52.610 System: DUP 8.00MiB 00:06:52.610 SSD detected: yes 00:06:52.610 Zoned device: no 00:06:52.610 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:52.610 Runtime features: free-space-tree 00:06:52.610 Checksum: crc32c 00:06:52.610 Number of devices: 1 00:06:52.610 Devices: 00:06:52.610 ID SIZE PATH 00:06:52.610 1 510.00MiB /dev/nvme0n1p1 00:06:52.610 00:06:52.610 07:24:08 -- common/autotest_common.sh@921 -- # return 0 00:06:52.610 07:24:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.868 07:24:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.868 07:24:08 -- target/filesystem.sh@25 -- # sync 00:06:52.868 07:24:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.868 07:24:08 -- target/filesystem.sh@27 -- # sync 00:06:52.868 07:24:08 -- target/filesystem.sh@29 -- # i=0 00:06:52.868 07:24:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.868 07:24:09 -- target/filesystem.sh@37 -- # kill -0 3993468 00:06:52.868 07:24:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.868 07:24:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.868 07:24:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.868 07:24:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.868 00:06:52.868 real 0m0.730s 00:06:52.868 user 0m0.025s 00:06:52.868 sys 0m0.100s 00:06:52.868 07:24:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.868 07:24:09 -- common/autotest_common.sh@10 -- # set +x 00:06:52.868 ************************************ 00:06:52.868 END TEST filesystem_btrfs 00:06:52.868 ************************************ 00:06:53.127 07:24:09 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:53.127 07:24:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:53.127 07:24:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.127 07:24:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.127 ************************************ 00:06:53.127 START TEST filesystem_xfs 00:06:53.127 ************************************ 00:06:53.127 07:24:09 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:53.127 07:24:09 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:53.127 07:24:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:53.127 07:24:09 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:53.127 07:24:09 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:53.127 07:24:09 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:53.127 07:24:09 -- common/autotest_common.sh@904 -- # local i=0 00:06:53.127 07:24:09 -- common/autotest_common.sh@905 -- # local force 00:06:53.127 07:24:09 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:53.127 07:24:09 -- common/autotest_common.sh@910 -- # force=-f 00:06:53.127 07:24:09 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:53.127 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:53.127 = sectsz=512 attr=2, projid32bit=1 00:06:53.127 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:53.127 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:53.127 data = bsize=4096 blocks=130560, imaxpct=25 00:06:53.127 = sunit=0 swidth=0 blks 00:06:53.127 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:53.127 log =internal log bsize=4096 blocks=16384, version=2 00:06:53.127 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:53.127 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:54.062 Discarding blocks...Done. 00:06:54.062 07:24:09 -- common/autotest_common.sh@921 -- # return 0 00:06:54.062 07:24:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.590 07:24:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:56.590 07:24:12 -- target/filesystem.sh@25 -- # sync 00:06:56.590 07:24:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:56.590 07:24:12 -- target/filesystem.sh@27 -- # sync 00:06:56.590 07:24:12 -- target/filesystem.sh@29 -- # i=0 00:06:56.590 07:24:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:56.590 07:24:12 -- target/filesystem.sh@37 -- # kill -0 3993468 00:06:56.590 07:24:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:56.590 07:24:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:56.590 07:24:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:56.590 07:24:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:56.590 00:06:56.590 real 0m3.382s 00:06:56.590 user 0m0.017s 00:06:56.590 sys 0m0.059s 00:06:56.590 07:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.590 07:24:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.590 ************************************ 00:06:56.590 END TEST filesystem_xfs 00:06:56.590 ************************************ 00:06:56.590 07:24:12 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:56.590 07:24:12 -- target/filesystem.sh@93 -- # sync 00:06:56.590 07:24:12 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:56.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.849 07:24:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:56.849 07:24:12 -- common/autotest_common.sh@1198 -- # local i=0 00:06:56.849 07:24:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:56.849 07:24:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.849 07:24:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:56.849 07:24:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.849 07:24:12 -- common/autotest_common.sh@1210 -- # return 0 00:06:56.849 07:24:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.849 07:24:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.849 07:24:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.849 07:24:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.849 07:24:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:56.849 07:24:12 -- target/filesystem.sh@101 -- # killprocess 3993468 00:06:56.849 07:24:12 -- common/autotest_common.sh@926 -- # '[' -z 3993468 ']' 00:06:56.849 07:24:12 -- common/autotest_common.sh@930 -- # kill -0 3993468 00:06:56.849 07:24:12 -- 
common/autotest_common.sh@931 -- # uname 00:06:56.849 07:24:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:56.849 07:24:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3993468 00:06:56.849 07:24:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:56.849 07:24:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:56.849 07:24:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3993468' 00:06:56.849 killing process with pid 3993468 00:06:56.849 07:24:12 -- common/autotest_common.sh@945 -- # kill 3993468 00:06:56.849 07:24:12 -- common/autotest_common.sh@950 -- # wait 3993468 00:06:57.417 07:24:13 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:57.417 00:06:57.417 real 0m12.375s 00:06:57.417 user 0m47.531s 00:06:57.417 sys 0m1.799s 00:06:57.417 07:24:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.417 07:24:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.417 ************************************ 00:06:57.417 END TEST nvmf_filesystem_no_in_capsule 00:06:57.417 ************************************ 00:06:57.417 07:24:13 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:57.417 07:24:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:57.417 07:24:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.417 07:24:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.417 ************************************ 00:06:57.417 START TEST nvmf_filesystem_in_capsule 00:06:57.417 ************************************ 00:06:57.417 07:24:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:06:57.417 07:24:13 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:57.417 07:24:13 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:57.417 07:24:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:57.417 07:24:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:57.417 07:24:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.417 07:24:13 -- nvmf/common.sh@469 -- # nvmfpid=3995753 00:06:57.417 07:24:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:57.417 07:24:13 -- nvmf/common.sh@470 -- # waitforlisten 3995753 00:06:57.417 07:24:13 -- common/autotest_common.sh@819 -- # '[' -z 3995753 ']' 00:06:57.417 07:24:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.417 07:24:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:57.417 07:24:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.417 07:24:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:57.417 07:24:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.417 [2024-07-14 07:24:13.452089] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
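[Sketch, not captured output: the three filesystem TESTs in the pass that just ended, and in the in-capsule pass now starting, each run the same check. Condensed from the trace, with the device name and target pid taken from this run:]
  # One nvmf_filesystem_create pass per filesystem, as traced for ext4/btrfs/xfs
  for fs in ext4 btrfs xfs; do
      force=-f; [ "$fs" = ext4 ] && force=-F           # mkfs.ext4 spells force as -F
      mkfs.$fs $force /dev/nvme0n1p1
      mount /dev/nvme0n1p1 /mnt/device
      touch /mnt/device/aaa
      sync
      rm /mnt/device/aaa
      sync
      umount /mnt/device
      kill -0 "$nvmfpid"                               # target process must still be alive
      lsblk -l -o NAME | grep -q -w nvme0n1            # namespace must still be visible
  done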
00:06:57.417 [2024-07-14 07:24:13.452174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.417 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.417 [2024-07-14 07:24:13.513933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.675 [2024-07-14 07:24:13.622992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:57.675 [2024-07-14 07:24:13.623162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:57.675 [2024-07-14 07:24:13.623187] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:57.675 [2024-07-14 07:24:13.623206] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:57.675 [2024-07-14 07:24:13.623267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.675 [2024-07-14 07:24:13.623331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.675 [2024-07-14 07:24:13.623396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.675 [2024-07-14 07:24:13.623403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.607 07:24:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:58.607 07:24:14 -- common/autotest_common.sh@852 -- # return 0 00:06:58.607 07:24:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:58.608 07:24:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:58.608 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.608 07:24:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.608 07:24:14 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:58.608 07:24:14 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:58.608 07:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.608 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.608 [2024-07-14 07:24:14.468508] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.608 07:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.608 07:24:14 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:58.608 07:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.608 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.608 Malloc1 00:06:58.608 07:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.608 07:24:14 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:58.608 07:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.608 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.608 07:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.608 07:24:14 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:58.608 07:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.608 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.608 07:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.608 07:24:14 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
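[Sketch, not captured output: relative to the first pass, the only configuration difference traced in this in-capsule run is the transport's in-capsule data size. With RPC as in the earlier sketch:]
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096   # allow 4 KiB of in-capsule data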
00:06:58.608 07:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.608 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.608 [2024-07-14 07:24:14.645334] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.608 07:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.608 07:24:14 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:58.608 07:24:14 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:58.608 07:24:14 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:58.608 07:24:14 -- common/autotest_common.sh@1359 -- # local bs 00:06:58.608 07:24:14 -- common/autotest_common.sh@1360 -- # local nb 00:06:58.608 07:24:14 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:58.608 07:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.608 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.608 07:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.608 07:24:14 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:58.608 { 00:06:58.608 "name": "Malloc1", 00:06:58.608 "aliases": [ 00:06:58.608 "a26b607a-a297-4b82-999f-d35903d436a9" 00:06:58.608 ], 00:06:58.608 "product_name": "Malloc disk", 00:06:58.608 "block_size": 512, 00:06:58.608 "num_blocks": 1048576, 00:06:58.608 "uuid": "a26b607a-a297-4b82-999f-d35903d436a9", 00:06:58.608 "assigned_rate_limits": { 00:06:58.608 "rw_ios_per_sec": 0, 00:06:58.608 "rw_mbytes_per_sec": 0, 00:06:58.608 "r_mbytes_per_sec": 0, 00:06:58.608 "w_mbytes_per_sec": 0 00:06:58.608 }, 00:06:58.608 "claimed": true, 00:06:58.608 "claim_type": "exclusive_write", 00:06:58.608 "zoned": false, 00:06:58.608 "supported_io_types": { 00:06:58.608 "read": true, 00:06:58.608 "write": true, 00:06:58.608 "unmap": true, 00:06:58.608 "write_zeroes": true, 00:06:58.608 "flush": true, 00:06:58.608 "reset": true, 00:06:58.608 "compare": false, 00:06:58.608 "compare_and_write": false, 00:06:58.608 "abort": true, 00:06:58.608 "nvme_admin": false, 00:06:58.608 "nvme_io": false 00:06:58.608 }, 00:06:58.608 "memory_domains": [ 00:06:58.608 { 00:06:58.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.608 "dma_device_type": 2 00:06:58.608 } 00:06:58.608 ], 00:06:58.608 "driver_specific": {} 00:06:58.608 } 00:06:58.608 ]' 00:06:58.608 07:24:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:58.608 07:24:14 -- common/autotest_common.sh@1362 -- # bs=512 00:06:58.608 07:24:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:58.608 07:24:14 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:58.608 07:24:14 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:58.608 07:24:14 -- common/autotest_common.sh@1367 -- # echo 512 00:06:58.608 07:24:14 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:58.608 07:24:14 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.542 07:24:15 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:59.542 07:24:15 -- common/autotest_common.sh@1177 -- # local i=0 00:06:59.542 07:24:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.542 07:24:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:59.542 07:24:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:01.441 07:24:17 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:01.441 07:24:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:01.441 07:24:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:01.441 07:24:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:01.441 07:24:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:01.441 07:24:17 -- common/autotest_common.sh@1187 -- # return 0 00:07:01.441 07:24:17 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:01.441 07:24:17 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:01.441 07:24:17 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:01.441 07:24:17 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:01.441 07:24:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:01.441 07:24:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:01.441 07:24:17 -- setup/common.sh@80 -- # echo 536870912 00:07:01.441 07:24:17 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:01.441 07:24:17 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:01.441 07:24:17 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:01.441 07:24:17 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:02.006 07:24:17 -- target/filesystem.sh@69 -- # partprobe 00:07:02.572 07:24:18 -- target/filesystem.sh@70 -- # sleep 1 00:07:03.540 07:24:19 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:03.540 07:24:19 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:03.540 07:24:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:03.540 07:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.540 07:24:19 -- common/autotest_common.sh@10 -- # set +x 00:07:03.540 ************************************ 00:07:03.540 START TEST filesystem_in_capsule_ext4 00:07:03.540 ************************************ 00:07:03.540 07:24:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:03.540 07:24:19 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:03.540 07:24:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.540 07:24:19 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:03.540 07:24:19 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:03.540 07:24:19 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:03.540 07:24:19 -- common/autotest_common.sh@904 -- # local i=0 00:07:03.540 07:24:19 -- common/autotest_common.sh@905 -- # local force 00:07:03.540 07:24:19 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:03.540 07:24:19 -- common/autotest_common.sh@908 -- # force=-F 00:07:03.540 07:24:19 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:03.540 mke2fs 1.46.5 (30-Dec-2021) 00:07:03.540 Discarding device blocks: 0/522240 done 00:07:03.540 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:03.540 Filesystem UUID: a32c1a1f-bdaa-471c-bcbb-9a0ae9ba911a 00:07:03.540 Superblock backups stored on blocks: 00:07:03.540 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:03.540 00:07:03.540 Allocating group tables: 0/64 done 00:07:03.540 Writing inode tables: 0/64 done 00:07:03.798 Creating journal (8192 blocks): done 00:07:04.620 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:04.620 00:07:04.620 
07:24:20 -- common/autotest_common.sh@921 -- # return 0 00:07:04.620 07:24:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:05.553 07:24:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:05.553 07:24:21 -- target/filesystem.sh@25 -- # sync 00:07:05.553 07:24:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:05.553 07:24:21 -- target/filesystem.sh@27 -- # sync 00:07:05.553 07:24:21 -- target/filesystem.sh@29 -- # i=0 00:07:05.553 07:24:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:05.553 07:24:21 -- target/filesystem.sh@37 -- # kill -0 3995753 00:07:05.553 07:24:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:05.553 07:24:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:05.553 07:24:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:05.553 07:24:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:05.553 00:07:05.553 real 0m2.187s 00:07:05.553 user 0m0.018s 00:07:05.553 sys 0m0.062s 00:07:05.553 07:24:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.553 07:24:21 -- common/autotest_common.sh@10 -- # set +x 00:07:05.553 ************************************ 00:07:05.553 END TEST filesystem_in_capsule_ext4 00:07:05.553 ************************************ 00:07:05.553 07:24:21 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:05.553 07:24:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:05.553 07:24:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.553 07:24:21 -- common/autotest_common.sh@10 -- # set +x 00:07:05.553 ************************************ 00:07:05.553 START TEST filesystem_in_capsule_btrfs 00:07:05.553 ************************************ 00:07:05.553 07:24:21 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:05.553 07:24:21 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:05.553 07:24:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:05.553 07:24:21 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:05.553 07:24:21 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:05.553 07:24:21 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:05.553 07:24:21 -- common/autotest_common.sh@904 -- # local i=0 00:07:05.553 07:24:21 -- common/autotest_common.sh@905 -- # local force 00:07:05.553 07:24:21 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:05.553 07:24:21 -- common/autotest_common.sh@910 -- # force=-f 00:07:05.553 07:24:21 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:06.118 btrfs-progs v6.6.2 00:07:06.118 See https://btrfs.readthedocs.io for more information. 00:07:06.118 00:07:06.118 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:06.118 NOTE: several default settings have changed in version 5.15, please make sure 00:07:06.118 this does not affect your deployments: 00:07:06.118 - DUP for metadata (-m dup) 00:07:06.118 - enabled no-holes (-O no-holes) 00:07:06.118 - enabled free-space-tree (-R free-space-tree) 00:07:06.118 00:07:06.118 Label: (null) 00:07:06.118 UUID: 1afb88c8-225d-4daa-ba8d-c61467201ba0 00:07:06.118 Node size: 16384 00:07:06.118 Sector size: 4096 00:07:06.118 Filesystem size: 510.00MiB 00:07:06.118 Block group profiles: 00:07:06.118 Data: single 8.00MiB 00:07:06.118 Metadata: DUP 32.00MiB 00:07:06.118 System: DUP 8.00MiB 00:07:06.118 SSD detected: yes 00:07:06.118 Zoned device: no 00:07:06.118 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:06.118 Runtime features: free-space-tree 00:07:06.118 Checksum: crc32c 00:07:06.118 Number of devices: 1 00:07:06.118 Devices: 00:07:06.118 ID SIZE PATH 00:07:06.118 1 510.00MiB /dev/nvme0n1p1 00:07:06.118 00:07:06.118 07:24:22 -- common/autotest_common.sh@921 -- # return 0 00:07:06.118 07:24:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.051 07:24:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.051 07:24:23 -- target/filesystem.sh@25 -- # sync 00:07:07.051 07:24:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.051 07:24:23 -- target/filesystem.sh@27 -- # sync 00:07:07.051 07:24:23 -- target/filesystem.sh@29 -- # i=0 00:07:07.051 07:24:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.051 07:24:23 -- target/filesystem.sh@37 -- # kill -0 3995753 00:07:07.051 07:24:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.051 07:24:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.051 07:24:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.051 07:24:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.051 00:07:07.051 real 0m1.424s 00:07:07.051 user 0m0.016s 00:07:07.051 sys 0m0.115s 00:07:07.051 07:24:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.051 07:24:23 -- common/autotest_common.sh@10 -- # set +x 00:07:07.051 ************************************ 00:07:07.051 END TEST filesystem_in_capsule_btrfs 00:07:07.051 ************************************ 00:07:07.051 07:24:23 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:07.051 07:24:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:07.051 07:24:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.051 07:24:23 -- common/autotest_common.sh@10 -- # set +x 00:07:07.051 ************************************ 00:07:07.051 START TEST filesystem_in_capsule_xfs 00:07:07.051 ************************************ 00:07:07.051 07:24:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:07.051 07:24:23 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:07.051 07:24:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.051 07:24:23 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:07.051 07:24:23 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:07.051 07:24:23 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:07.051 07:24:23 -- common/autotest_common.sh@904 -- # local i=0 00:07:07.051 07:24:23 -- common/autotest_common.sh@905 -- # local force 00:07:07.051 07:24:23 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:07.051 07:24:23 -- common/autotest_common.sh@910 -- # force=-f 
00:07:07.051 07:24:23 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:07.308 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:07.308 = sectsz=512 attr=2, projid32bit=1 00:07:07.308 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:07.308 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:07.308 data = bsize=4096 blocks=130560, imaxpct=25 00:07:07.308 = sunit=0 swidth=0 blks 00:07:07.308 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:07.308 log =internal log bsize=4096 blocks=16384, version=2 00:07:07.308 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:07.308 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:07.871 Discarding blocks...Done. 00:07:07.871 07:24:23 -- common/autotest_common.sh@921 -- # return 0 00:07:07.871 07:24:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:10.399 07:24:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:10.399 07:24:26 -- target/filesystem.sh@25 -- # sync 00:07:10.399 07:24:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:10.399 07:24:26 -- target/filesystem.sh@27 -- # sync 00:07:10.399 07:24:26 -- target/filesystem.sh@29 -- # i=0 00:07:10.399 07:24:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:10.399 07:24:26 -- target/filesystem.sh@37 -- # kill -0 3995753 00:07:10.399 07:24:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:10.399 07:24:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:10.399 07:24:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:10.399 07:24:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:10.399 00:07:10.399 real 0m3.383s 00:07:10.399 user 0m0.009s 00:07:10.399 sys 0m0.066s 00:07:10.399 07:24:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.399 07:24:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.399 ************************************ 00:07:10.399 END TEST filesystem_in_capsule_xfs 00:07:10.399 ************************************ 00:07:10.399 07:24:26 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:10.399 07:24:26 -- target/filesystem.sh@93 -- # sync 00:07:10.400 07:24:26 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.659 07:24:26 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.659 07:24:26 -- common/autotest_common.sh@1198 -- # local i=0 00:07:10.659 07:24:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:10.659 07:24:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.659 07:24:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:10.659 07:24:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.659 07:24:26 -- common/autotest_common.sh@1210 -- # return 0 00:07:10.659 07:24:26 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.659 07:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:10.659 07:24:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.659 07:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:10.659 07:24:26 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:10.659 07:24:26 -- target/filesystem.sh@101 -- # killprocess 3995753 00:07:10.659 07:24:26 -- common/autotest_common.sh@926 -- # '[' -z 3995753 ']' 00:07:10.659 07:24:26 -- common/autotest_common.sh@930 -- # kill -0 3995753 
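[Sketch, not captured output: the teardown traced around here, and at the end of the earlier pass, reduces to the sequence below. Names come from this run; the retry loops and lsblk checks the script performs are omitted, and RPC is as in the earlier sketch.]
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1       # drop the test partition under a lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                   # killprocess
  modprobe -v -r nvme-tcp nvme-fabrics                 # nvmftestfini unloads the initiator stack
  ip -4 addr flush cvl_0_1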
00:07:10.659 07:24:26 -- common/autotest_common.sh@931 -- # uname 00:07:10.659 07:24:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:10.659 07:24:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3995753 00:07:10.659 07:24:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:10.659 07:24:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:10.659 07:24:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3995753' 00:07:10.659 killing process with pid 3995753 00:07:10.659 07:24:26 -- common/autotest_common.sh@945 -- # kill 3995753 00:07:10.659 07:24:26 -- common/autotest_common.sh@950 -- # wait 3995753 00:07:11.227 07:24:27 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:11.227 00:07:11.227 real 0m13.770s 00:07:11.227 user 0m53.003s 00:07:11.227 sys 0m1.940s 00:07:11.227 07:24:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.227 07:24:27 -- common/autotest_common.sh@10 -- # set +x 00:07:11.227 ************************************ 00:07:11.227 END TEST nvmf_filesystem_in_capsule 00:07:11.227 ************************************ 00:07:11.227 07:24:27 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:11.227 07:24:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:11.227 07:24:27 -- nvmf/common.sh@116 -- # sync 00:07:11.227 07:24:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:11.227 07:24:27 -- nvmf/common.sh@119 -- # set +e 00:07:11.227 07:24:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:11.227 07:24:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:11.227 rmmod nvme_tcp 00:07:11.227 rmmod nvme_fabrics 00:07:11.227 rmmod nvme_keyring 00:07:11.227 07:24:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:11.227 07:24:27 -- nvmf/common.sh@123 -- # set -e 00:07:11.227 07:24:27 -- nvmf/common.sh@124 -- # return 0 00:07:11.227 07:24:27 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:11.227 07:24:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:11.227 07:24:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:11.227 07:24:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:11.227 07:24:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.227 07:24:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:11.227 07:24:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.227 07:24:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.227 07:24:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.136 07:24:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:13.136 00:07:13.136 real 0m30.606s 00:07:13.136 user 1m41.451s 00:07:13.136 sys 0m5.286s 00:07:13.136 07:24:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.136 07:24:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.136 ************************************ 00:07:13.136 END TEST nvmf_filesystem 00:07:13.136 ************************************ 00:07:13.394 07:24:29 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:13.394 07:24:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:13.394 07:24:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.394 07:24:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.394 ************************************ 00:07:13.394 START TEST nvmf_discovery 00:07:13.394 ************************************ 00:07:13.394 
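The wind-down that closed the filesystem suite follows one fixed pattern: disconnect the initiator, delete the subsystem over RPC, stop the target, then unload the initiator-side modules. A standalone sketch of that sequence (it assumes the shell running it also launched nvmf_tgt, so $nvmfpid is a child and wait is valid; the rpc.py path is this job's checkout):

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    nvme disconnect -n "$nqn"            # detach the initiator first
    "$rpc" nvmf_delete_subsystem "$nqn"  # remove the subsystem from the target
    kill "$nvmfpid"                      # stop nvmf_tgt (pid saved at launch)
    wait "$nvmfpid" || true
    modprobe -v -r nvme-tcp              # finally unload initiator transports
    modprobe -v -r nvme-fabrics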
07:24:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:13.394 * Looking for test storage... 00:07:13.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.394 07:24:29 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.394 07:24:29 -- nvmf/common.sh@7 -- # uname -s 00:07:13.394 07:24:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.394 07:24:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.394 07:24:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.394 07:24:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.394 07:24:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.394 07:24:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.394 07:24:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.394 07:24:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.394 07:24:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.394 07:24:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.394 07:24:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.394 07:24:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.394 07:24:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.394 07:24:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.394 07:24:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.394 07:24:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.394 07:24:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.394 07:24:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.394 07:24:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.394 07:24:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.394 07:24:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.394 07:24:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.394 07:24:29 -- paths/export.sh@5 -- # export PATH 00:07:13.394 07:24:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.394 07:24:29 -- nvmf/common.sh@46 -- # : 0 00:07:13.394 07:24:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:13.394 07:24:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:13.394 07:24:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:13.394 07:24:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.394 07:24:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.394 07:24:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:13.394 07:24:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:13.394 07:24:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:13.394 07:24:29 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:13.394 07:24:29 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:13.394 07:24:29 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:13.395 07:24:29 -- target/discovery.sh@15 -- # hash nvme 00:07:13.395 07:24:29 -- target/discovery.sh@20 -- # nvmftestinit 00:07:13.395 07:24:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:13.395 07:24:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.395 07:24:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:13.395 07:24:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:13.395 07:24:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:13.395 07:24:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.395 07:24:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.395 07:24:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.395 07:24:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:13.395 07:24:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:13.395 07:24:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:13.395 07:24:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.299 07:24:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:15.299 07:24:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:15.299 07:24:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:15.299 07:24:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:15.299 07:24:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:15.299 07:24:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:15.299 07:24:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:15.299 07:24:31 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:15.299 07:24:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:15.299 07:24:31 -- nvmf/common.sh@295 -- # e810=() 00:07:15.299 07:24:31 -- nvmf/common.sh@295 -- # local -ga e810 00:07:15.299 07:24:31 -- nvmf/common.sh@296 -- # x722=() 00:07:15.299 07:24:31 -- nvmf/common.sh@296 -- # local -ga x722 00:07:15.299 07:24:31 -- nvmf/common.sh@297 -- # mlx=() 00:07:15.299 07:24:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:15.299 07:24:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.299 07:24:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:15.299 07:24:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:15.299 07:24:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:15.299 07:24:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:15.299 07:24:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:15.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:15.299 07:24:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:15.299 07:24:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:15.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:15.299 07:24:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:15.299 07:24:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:15.299 07:24:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.299 07:24:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:15.299 07:24:31 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.299 07:24:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:15.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:15.299 07:24:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.299 07:24:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:15.299 07:24:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.299 07:24:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:15.299 07:24:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.299 07:24:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:15.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:15.299 07:24:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.299 07:24:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:15.299 07:24:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:15.299 07:24:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:15.299 07:24:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:15.299 07:24:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.299 07:24:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.299 07:24:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.299 07:24:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:15.299 07:24:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.299 07:24:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.299 07:24:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:15.299 07:24:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.299 07:24:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.299 07:24:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:15.299 07:24:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:15.299 07:24:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.558 07:24:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.558 07:24:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.558 07:24:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.558 07:24:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:15.558 07:24:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.558 07:24:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.558 07:24:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.558 07:24:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:15.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:07:15.558 00:07:15.558 --- 10.0.0.2 ping statistics --- 00:07:15.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.558 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:15.558 07:24:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:07:15.558 00:07:15.558 --- 10.0.0.1 ping statistics --- 00:07:15.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.558 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:15.558 07:24:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.558 07:24:31 -- nvmf/common.sh@410 -- # return 0 00:07:15.558 07:24:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:15.558 07:24:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.558 07:24:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:15.558 07:24:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:15.558 07:24:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.558 07:24:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:15.558 07:24:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:15.558 07:24:31 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:15.558 07:24:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:15.558 07:24:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:15.558 07:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:15.558 07:24:31 -- nvmf/common.sh@469 -- # nvmfpid=3999550 00:07:15.558 07:24:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:15.558 07:24:31 -- nvmf/common.sh@470 -- # waitforlisten 3999550 00:07:15.558 07:24:31 -- common/autotest_common.sh@819 -- # '[' -z 3999550 ']' 00:07:15.558 07:24:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.558 07:24:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:15.559 07:24:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.559 07:24:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:15.559 07:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:15.559 [2024-07-14 07:24:31.671323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:15.559 [2024-07-14 07:24:31.671411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.559 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.817 [2024-07-14 07:24:31.738724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.817 [2024-07-14 07:24:31.848216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:15.817 [2024-07-14 07:24:31.848374] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.817 [2024-07-14 07:24:31.848400] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.817 [2024-07-14 07:24:31.848419] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
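nvmf_tcp_init above built the test topology: one port of the NIC pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and both directions are pinged. A sketch of the same wiring with a veth pair instead of the two E810 ports (the veth substitution is an assumption, so it runs on any Linux box):

    #!/usr/bin/env bash
    set -euo pipefail
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link add cvl_0_1 type veth peer name cvl_0_0   # initiator <-> target pair
    ip link set cvl_0_0 netns "$NS"                   # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP (port 4420) in through the initiator-side interface,
    # mirroring the iptables rule in the trace above.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator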
00:07:15.817 [2024-07-14 07:24:31.848480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.817 [2024-07-14 07:24:31.848506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.817 [2024-07-14 07:24:31.848567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.817 [2024-07-14 07:24:31.848575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.751 07:24:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.751 07:24:32 -- common/autotest_common.sh@852 -- # return 0 00:07:16.751 07:24:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:16.751 07:24:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.751 07:24:32 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 [2024-07-14 07:24:32.669497] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@26 -- # seq 1 4 00:07:16.751 07:24:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:16.751 07:24:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 Null1 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 [2024-07-14 07:24:32.709739] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:16.751 07:24:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 Null2 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:16.751 07:24:32 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:16.751 07:24:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 Null3 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:16.751 07:24:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 Null4 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:16.751 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:16.751 07:24:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:16.751 
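The loop traced here (the @27-@30 steps of target/discovery.sh) gives each of the four null bdevs its own subsystem and TCP listener. Stripped of the xtrace noise, an equivalent rpc.py sketch (rpc_cmd in the test talks to the same default /var/tmp/spdk.sock):

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192        # once per target

    for i in 1 2 3 4; do
      "$rpc" bdev_null_create "Null$i" 102400 512         # 100 MiB, 512 B blocks
      "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
             -a -s "SPDK0000000000000$i"                  # allow any host, set serial
      "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
             -t tcp -a 10.0.0.2 -s 4420
    done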
07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:16.751 07:24:32 -- common/autotest_common.sh@10 -- # set +x
00:07:16.751 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:16.751 07:24:32 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:16.752 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:16.752 07:24:32 -- common/autotest_common.sh@10 -- # set +x
00:07:16.752 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:16.752 07:24:32 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:07:16.752 07:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:16.752 07:24:32 -- common/autotest_common.sh@10 -- # set +x
00:07:16.752 07:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:16.752 07:24:32 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:07:17.010
00:07:17.010 Discovery Log Number of Records 6, Generation counter 6
00:07:17.010 =====Discovery Log Entry 0======
00:07:17.010 trtype: tcp
00:07:17.010 adrfam: ipv4
00:07:17.010 subtype: current discovery subsystem
00:07:17.010 treq: not required
00:07:17.010 portid: 0
00:07:17.010 trsvcid: 4420
00:07:17.010 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:17.010 traddr: 10.0.0.2
00:07:17.010 eflags: explicit discovery connections, duplicate discovery information
00:07:17.010 sectype: none
00:07:17.010 =====Discovery Log Entry 1======
00:07:17.010 trtype: tcp
00:07:17.010 adrfam: ipv4
00:07:17.010 subtype: nvme subsystem
00:07:17.010 treq: not required
00:07:17.010 portid: 0
00:07:17.010 trsvcid: 4420
00:07:17.010 subnqn: nqn.2016-06.io.spdk:cnode1
00:07:17.010 traddr: 10.0.0.2
00:07:17.010 eflags: none
00:07:17.010 sectype: none
00:07:17.010 =====Discovery Log Entry 2======
00:07:17.010 trtype: tcp
00:07:17.010 adrfam: ipv4
00:07:17.010 subtype: nvme subsystem
00:07:17.010 treq: not required
00:07:17.010 portid: 0
00:07:17.010 trsvcid: 4420
00:07:17.010 subnqn: nqn.2016-06.io.spdk:cnode2
00:07:17.010 traddr: 10.0.0.2
00:07:17.010 eflags: none
00:07:17.010 sectype: none
00:07:17.010 =====Discovery Log Entry 3======
00:07:17.010 trtype: tcp
00:07:17.010 adrfam: ipv4
00:07:17.010 subtype: nvme subsystem
00:07:17.010 treq: not required
00:07:17.010 portid: 0
00:07:17.010 trsvcid: 4420
00:07:17.010 subnqn: nqn.2016-06.io.spdk:cnode3
00:07:17.010 traddr: 10.0.0.2
00:07:17.010 eflags: none
00:07:17.010 sectype: none
00:07:17.010 =====Discovery Log Entry 4======
00:07:17.010 trtype: tcp
00:07:17.010 adrfam: ipv4
00:07:17.010 subtype: nvme subsystem
00:07:17.010 treq: not required
00:07:17.010 portid: 0
00:07:17.010 trsvcid: 4420
00:07:17.010 subnqn: nqn.2016-06.io.spdk:cnode4
00:07:17.010 traddr: 10.0.0.2
00:07:17.010 eflags: none
00:07:17.010 sectype: none
00:07:17.010 =====Discovery Log Entry 5======
00:07:17.010 trtype: tcp
00:07:17.010 adrfam: ipv4
00:07:17.010 subtype: discovery subsystem referral
00:07:17.010 treq: not required
00:07:17.010 portid: 0
00:07:17.010 trsvcid: 4430
00:07:17.010 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:17.010 traddr: 10.0.0.2
00:07:17.010 eflags: none
00:07:17.010 sectype: none
00:07:17.010 07:24:33 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:07:17.010 Perform nvmf subsystem discovery via RPC
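The discovery page above reports six records: the current discovery subsystem itself, the four cnode subsystems, and the 4430 referral added just before. Any of the four can now be attached with the same host identity the discover call used; a hedged follow-on sketch:

    # Attach one of the reported subsystems, reusing this run's host identity.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

    # JSON output makes the log page scriptable, e.g. list only subsystem NQNs:
    nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -a 10.0.0.2 -s 4420 -o json |
      jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'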
00:07:17.010 07:24:33 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:07:17.010 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:17.010 07:24:33 -- common/autotest_common.sh@10 -- # set +x
00:07:17.010 [2024-07-14 07:24:33.034750] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:07:17.010 [
00:07:17.010   {
00:07:17.010     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:07:17.010     "subtype": "Discovery",
00:07:17.010     "listen_addresses": [
00:07:17.010       {
00:07:17.010         "transport": "TCP",
00:07:17.010         "trtype": "TCP",
00:07:17.010         "adrfam": "IPv4",
00:07:17.010         "traddr": "10.0.0.2",
00:07:17.010         "trsvcid": "4420"
00:07:17.010       }
00:07:17.010     ],
00:07:17.010     "allow_any_host": true,
00:07:17.010     "hosts": []
00:07:17.010   },
00:07:17.010   {
00:07:17.010     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:07:17.010     "subtype": "NVMe",
00:07:17.010     "listen_addresses": [
00:07:17.010       {
00:07:17.010         "transport": "TCP",
00:07:17.011         "trtype": "TCP",
00:07:17.011         "adrfam": "IPv4",
00:07:17.011         "traddr": "10.0.0.2",
00:07:17.011         "trsvcid": "4420"
00:07:17.011       }
00:07:17.011     ],
00:07:17.011     "allow_any_host": true,
00:07:17.011     "hosts": [],
00:07:17.011     "serial_number": "SPDK00000000000001",
00:07:17.011     "model_number": "SPDK bdev Controller",
00:07:17.011     "max_namespaces": 32,
00:07:17.011     "min_cntlid": 1,
00:07:17.011     "max_cntlid": 65519,
00:07:17.011     "namespaces": [
00:07:17.011       {
00:07:17.011         "nsid": 1,
00:07:17.011         "bdev_name": "Null1",
00:07:17.011         "name": "Null1",
00:07:17.011         "nguid": "4416A656999C4C698D1D5C20620FDD8D",
00:07:17.011         "uuid": "4416a656-999c-4c69-8d1d-5c20620fdd8d"
00:07:17.011       }
00:07:17.011     ]
00:07:17.011   },
00:07:17.011   {
00:07:17.011     "nqn": "nqn.2016-06.io.spdk:cnode2",
00:07:17.011     "subtype": "NVMe",
00:07:17.011     "listen_addresses": [
00:07:17.011       {
00:07:17.011         "transport": "TCP",
00:07:17.011         "trtype": "TCP",
00:07:17.011         "adrfam": "IPv4",
00:07:17.011         "traddr": "10.0.0.2",
00:07:17.011         "trsvcid": "4420"
00:07:17.011       }
00:07:17.011     ],
00:07:17.011     "allow_any_host": true,
00:07:17.011     "hosts": [],
00:07:17.011     "serial_number": "SPDK00000000000002",
00:07:17.011     "model_number": "SPDK bdev Controller",
00:07:17.011     "max_namespaces": 32,
00:07:17.011     "min_cntlid": 1,
00:07:17.011     "max_cntlid": 65519,
00:07:17.011     "namespaces": [
00:07:17.011       {
00:07:17.011         "nsid": 1,
00:07:17.011         "bdev_name": "Null2",
00:07:17.011         "name": "Null2",
00:07:17.011         "nguid": "97CA4737FFCB4AD5B34A0B2B6E0AEAD8",
00:07:17.011         "uuid": "97ca4737-ffcb-4ad5-b34a-0b2b6e0aead8"
00:07:17.011       }
00:07:17.011     ]
00:07:17.011   },
00:07:17.011   {
00:07:17.011     "nqn": "nqn.2016-06.io.spdk:cnode3",
00:07:17.011     "subtype": "NVMe",
00:07:17.011     "listen_addresses": [
00:07:17.011       {
00:07:17.011         "transport": "TCP",
00:07:17.011         "trtype": "TCP",
00:07:17.011         "adrfam": "IPv4",
00:07:17.011         "traddr": "10.0.0.2",
00:07:17.011         "trsvcid": "4420"
00:07:17.011       }
00:07:17.011     ],
00:07:17.011     "allow_any_host": true,
00:07:17.011     "hosts": [],
00:07:17.011     "serial_number": "SPDK00000000000003",
00:07:17.011     "model_number": "SPDK bdev Controller",
00:07:17.011     "max_namespaces": 32,
00:07:17.011     "min_cntlid": 1,
00:07:17.011     "max_cntlid": 65519,
00:07:17.011     "namespaces": [
00:07:17.011       {
00:07:17.011         "nsid": 1,
00:07:17.011         "bdev_name": "Null3",
00:07:17.011         "name": "Null3",
00:07:17.011         "nguid": "27F18B1877754C059B460300E2C652D7",
00:07:17.011         "uuid": "27f18b18-7775-4c05-9b46-0300e2c652d7"
00:07:17.011       }
00:07:17.011     ]
00:07:17.011   },
00:07:17.011   {
00:07:17.011     "nqn": "nqn.2016-06.io.spdk:cnode4",
00:07:17.011     "subtype": "NVMe",
00:07:17.011     "listen_addresses": [
00:07:17.011       {
00:07:17.011         "transport": "TCP",
00:07:17.011         "trtype": "TCP",
00:07:17.011         "adrfam": "IPv4",
00:07:17.011         "traddr": "10.0.0.2",
00:07:17.011         "trsvcid": "4420"
00:07:17.011       }
00:07:17.011     ],
00:07:17.011     "allow_any_host": true,
00:07:17.011     "hosts": [],
00:07:17.011     "serial_number": "SPDK00000000000004",
00:07:17.011     "model_number": "SPDK bdev Controller",
00:07:17.011     "max_namespaces": 32,
00:07:17.011     "min_cntlid": 1,
00:07:17.011     "max_cntlid": 65519,
00:07:17.011     "namespaces": [
00:07:17.011       {
00:07:17.011         "nsid": 1,
00:07:17.011         "bdev_name": "Null4",
00:07:17.011         "name": "Null4",
00:07:17.011         "nguid": "F134D3F185814AE8B3FEC8605F7C5BCB",
00:07:17.011         "uuid": "f134d3f1-8581-4ae8-b3fe-c8605f7c5bcb"
00:07:17.011       }
00:07:17.011     ]
00:07:17.011   }
00:07:17.011 ]
00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:17.011 07:24:33 -- target/discovery.sh@42 -- # seq 1 4
00:07:17.011 07:24:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:17.011 07:24:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x
00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:17.011 07:24:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x
00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:17.011 07:24:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:17.011 07:24:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x
00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:17.011 07:24:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x
00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:17.011 07:24:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:17.011 07:24:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x
00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:17.011 07:24:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x
00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:17.011 07:24:33 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:17.011 07:24:33 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x
00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
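The payload above is plain JSON once the per-line timestamps are stripped, so post-processing is a jq exercise. A couple of hedged examples against the same RPC:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # NQN of every NVMe subsystem (skip the discovery entry):
    "$rpc" nvmf_get_subsystems | jq -r '.[] | select(.subtype == "NVMe").nqn'

    # nqn -> backing bdev for each namespace:
    "$rpc" nvmf_get_subsystems |
      jq -r '.[] | select(.subtype == "NVMe") | "\(.nqn) \(.namespaces[].bdev_name)"'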
00:07:17.011 07:24:33 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.011 07:24:33 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.011 07:24:33 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:17.011 07:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.011 07:24:33 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:17.011 07:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 07:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.011 07:24:33 -- target/discovery.sh@49 -- # check_bdevs= 00:07:17.011 07:24:33 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:17.011 07:24:33 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:17.011 07:24:33 -- target/discovery.sh@57 -- # nvmftestfini 00:07:17.011 07:24:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:17.011 07:24:33 -- nvmf/common.sh@116 -- # sync 00:07:17.011 07:24:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:17.011 07:24:33 -- nvmf/common.sh@119 -- # set +e 00:07:17.011 07:24:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:17.011 07:24:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:17.011 rmmod nvme_tcp 00:07:17.269 rmmod nvme_fabrics 00:07:17.269 rmmod nvme_keyring 00:07:17.269 07:24:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:17.269 07:24:33 -- nvmf/common.sh@123 -- # set -e 00:07:17.269 07:24:33 -- nvmf/common.sh@124 -- # return 0 00:07:17.269 07:24:33 -- nvmf/common.sh@477 -- # '[' -n 3999550 ']' 00:07:17.269 07:24:33 -- nvmf/common.sh@478 -- # killprocess 3999550 00:07:17.269 07:24:33 -- common/autotest_common.sh@926 -- # '[' -z 3999550 ']' 00:07:17.269 07:24:33 -- common/autotest_common.sh@930 -- # kill -0 3999550 00:07:17.269 07:24:33 -- common/autotest_common.sh@931 -- # uname 00:07:17.269 07:24:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:17.269 07:24:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3999550 00:07:17.269 07:24:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:17.269 07:24:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:17.269 07:24:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3999550' 00:07:17.269 killing process with pid 3999550 00:07:17.269 07:24:33 -- common/autotest_common.sh@945 -- # kill 3999550 00:07:17.269 [2024-07-14 07:24:33.245989] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:17.269 07:24:33 -- common/autotest_common.sh@950 -- # wait 3999550 00:07:17.527 07:24:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:17.527 07:24:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:17.527 07:24:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:17.527 07:24:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:17.527 07:24:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:17.527 07:24:33 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.527 07:24:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.527 07:24:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.468 07:24:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:19.468 00:07:19.468 real 0m6.252s 00:07:19.468 user 0m7.545s 00:07:19.468 sys 0m1.924s 00:07:19.468 07:24:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.468 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:07:19.468 ************************************ 00:07:19.468 END TEST nvmf_discovery 00:07:19.468 ************************************ 00:07:19.468 07:24:35 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:19.468 07:24:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:19.468 07:24:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.468 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:07:19.468 ************************************ 00:07:19.468 START TEST nvmf_referrals 00:07:19.468 ************************************ 00:07:19.468 07:24:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:19.727 * Looking for test storage... 00:07:19.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.727 07:24:35 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.727 07:24:35 -- nvmf/common.sh@7 -- # uname -s 00:07:19.727 07:24:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.727 07:24:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.727 07:24:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.727 07:24:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.727 07:24:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.727 07:24:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.727 07:24:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.727 07:24:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.727 07:24:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.727 07:24:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.727 07:24:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.727 07:24:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.727 07:24:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.727 07:24:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.727 07:24:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.727 07:24:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.727 07:24:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.727 07:24:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.727 07:24:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.727 07:24:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.727 07:24:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.727 07:24:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.727 07:24:35 -- paths/export.sh@5 -- # export PATH 00:07:19.728 07:24:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.728 07:24:35 -- nvmf/common.sh@46 -- # : 0 00:07:19.728 07:24:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:19.728 07:24:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:19.728 07:24:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:19.728 07:24:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.728 07:24:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.728 07:24:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:19.728 07:24:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:19.728 07:24:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:19.728 07:24:35 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:19.728 07:24:35 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:19.728 07:24:35 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:19.728 07:24:35 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:19.728 07:24:35 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:19.728 07:24:35 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:19.728 07:24:35 -- target/referrals.sh@37 -- # nvmftestinit 00:07:19.728 07:24:35 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:07:19.728 07:24:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.728 07:24:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:19.728 07:24:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:19.728 07:24:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:19.728 07:24:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.728 07:24:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.728 07:24:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.728 07:24:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:19.728 07:24:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:19.728 07:24:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:19.728 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.633 07:24:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:21.633 07:24:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:21.633 07:24:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:21.633 07:24:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:21.633 07:24:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:21.633 07:24:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:21.633 07:24:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:21.633 07:24:37 -- nvmf/common.sh@294 -- # net_devs=() 00:07:21.633 07:24:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:21.633 07:24:37 -- nvmf/common.sh@295 -- # e810=() 00:07:21.633 07:24:37 -- nvmf/common.sh@295 -- # local -ga e810 00:07:21.633 07:24:37 -- nvmf/common.sh@296 -- # x722=() 00:07:21.633 07:24:37 -- nvmf/common.sh@296 -- # local -ga x722 00:07:21.633 07:24:37 -- nvmf/common.sh@297 -- # mlx=() 00:07:21.633 07:24:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:21.633 07:24:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.633 07:24:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:21.633 07:24:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:21.633 07:24:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:21.633 07:24:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:21.633 07:24:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:21.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:21.633 07:24:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:21.633 07:24:37 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:21.633 07:24:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:21.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:21.633 07:24:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:21.633 07:24:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:21.633 07:24:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.633 07:24:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:21.633 07:24:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.633 07:24:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:21.633 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:21.633 07:24:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.633 07:24:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:21.633 07:24:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.633 07:24:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:21.633 07:24:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.633 07:24:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:21.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:21.633 07:24:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.633 07:24:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:21.633 07:24:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:21.633 07:24:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:21.633 07:24:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:21.633 07:24:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.633 07:24:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.633 07:24:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.633 07:24:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:21.633 07:24:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.633 07:24:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.633 07:24:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:21.634 07:24:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.634 07:24:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.634 07:24:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:21.634 07:24:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:21.634 07:24:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.634 07:24:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
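Both "Found net devices under ..." lines come from the same sysfs walk: match each PCI function against the supported vendor:device table (0x8086:0x159b is the E810 pair on this rig), then list the netdevs bound under it. A sketch of that lookup:

    #!/usr/bin/env bash
    # Print the netdev behind every PCI function matching a vendor:device pair.
    # 0x8086:0x159b is what this rig reports; swap in your own NIC's IDs.
    VENDOR=0x8086 DEVICE=0x159b

    for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/vendor") == "$VENDOR" && $(cat "$dev/device") == "$DEVICE" ]] || continue
      for net in "$dev"/net/*; do
        [[ -e $net ]] && echo "Found net device under ${dev##*/}: ${net##*/}"
      done
    done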
00:07:21.634 07:24:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.634 07:24:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.634 07:24:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:21.634 07:24:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.891 07:24:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.891 07:24:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.891 07:24:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:21.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:07:21.891 00:07:21.891 --- 10.0.0.2 ping statistics --- 00:07:21.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.891 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:21.891 07:24:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:21.891 00:07:21.891 --- 10.0.0.1 ping statistics --- 00:07:21.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.891 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:21.891 07:24:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.891 07:24:37 -- nvmf/common.sh@410 -- # return 0 00:07:21.891 07:24:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:21.891 07:24:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.891 07:24:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:21.891 07:24:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:21.891 07:24:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.891 07:24:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:21.891 07:24:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:21.891 07:24:37 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:21.891 07:24:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:21.891 07:24:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:21.891 07:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:21.891 07:24:37 -- nvmf/common.sh@469 -- # nvmfpid=4001670 00:07:21.891 07:24:37 -- nvmf/common.sh@470 -- # waitforlisten 4001670 00:07:21.891 07:24:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.891 07:24:37 -- common/autotest_common.sh@819 -- # '[' -z 4001670 ']' 00:07:21.891 07:24:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.891 07:24:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:21.891 07:24:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.891 07:24:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:21.891 07:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:21.891 [2024-07-14 07:24:37.900428] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
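nvmfappstart boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers; waitforlisten does the polling. A sketch under those assumptions (rpc_get_methods is just a cheap RPC to probe with):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # waitforlisten equivalent: block until rpc.py can reach /var/tmp/spdk.sock.
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
    done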
00:07:21.891 [2024-07-14 07:24:37.900504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.891 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.891 [2024-07-14 07:24:37.970363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.149 [2024-07-14 07:24:38.090581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.149 [2024-07-14 07:24:38.090741] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.149 [2024-07-14 07:24:38.090761] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.149 [2024-07-14 07:24:38.090775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.149 [2024-07-14 07:24:38.090844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.149 [2024-07-14 07:24:38.090937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.149 [2024-07-14 07:24:38.090899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.149 [2024-07-14 07:24:38.090941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.715 07:24:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:22.715 07:24:38 -- common/autotest_common.sh@852 -- # return 0 00:07:22.715 07:24:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:22.715 07:24:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:22.715 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 07:24:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.715 07:24:38 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.715 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.715 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 [2024-07-14 07:24:38.844292] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.715 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.715 07:24:38 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:22.715 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.715 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 [2024-07-14 07:24:38.856462] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:22.715 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.715 07:24:38 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:22.715 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.715 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.715 07:24:38 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:22.715 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.715 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.715 07:24:38 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:07:22.715 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.715 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.974 07:24:38 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.974 07:24:38 -- target/referrals.sh@48 -- # jq length 00:07:22.974 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.974 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.974 07:24:38 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:22.974 07:24:38 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:22.974 07:24:38 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:22.974 07:24:38 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.974 07:24:38 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:22.974 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.974 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 07:24:38 -- target/referrals.sh@21 -- # sort 00:07:22.974 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.974 07:24:38 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:22.974 07:24:38 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:22.974 07:24:38 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:22.974 07:24:38 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:22.974 07:24:38 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:22.974 07:24:38 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.974 07:24:38 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:22.974 07:24:38 -- target/referrals.sh@26 -- # sort 00:07:22.974 07:24:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:22.974 07:24:39 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:22.974 07:24:39 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:22.974 07:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.974 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.974 07:24:39 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:22.974 07:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.974 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.974 07:24:39 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:22.974 07:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.974 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.974 07:24:39 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.974 07:24:39 -- target/referrals.sh@56 -- # jq length 00:07:22.974 07:24:39 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.974 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.232 07:24:39 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:23.232 07:24:39 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:23.232 07:24:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:23.232 07:24:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:23.232 07:24:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.232 07:24:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:23.232 07:24:39 -- target/referrals.sh@26 -- # sort 00:07:23.232 07:24:39 -- target/referrals.sh@26 -- # echo 00:07:23.232 07:24:39 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:23.232 07:24:39 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:23.232 07:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.232 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.232 07:24:39 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:23.232 07:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.232 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.232 07:24:39 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:23.232 07:24:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:23.232 07:24:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:23.232 07:24:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:23.232 07:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.232 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.232 07:24:39 -- target/referrals.sh@21 -- # sort 00:07:23.232 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.232 07:24:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:23.232 07:24:39 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:23.232 07:24:39 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:23.232 07:24:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:23.232 07:24:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:23.233 07:24:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.233 07:24:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:23.233 07:24:39 -- target/referrals.sh@26 -- # sort 00:07:23.490 07:24:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:23.491 07:24:39 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:23.491 07:24:39 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:23.491 07:24:39 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:23.491 07:24:39 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:23.491 07:24:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.491 07:24:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:23.749 07:24:39 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:23.749 07:24:39 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:23.749 07:24:39 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:23.749 07:24:39 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:23.749 07:24:39 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.749 07:24:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:23.749 07:24:39 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:23.749 07:24:39 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:23.749 07:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.749 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.749 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.749 07:24:39 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:23.749 07:24:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:23.749 07:24:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:23.749 07:24:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:23.749 07:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:23.749 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.749 07:24:39 -- target/referrals.sh@21 -- # sort 00:07:23.749 07:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:23.749 07:24:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:23.749 07:24:39 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:23.749 07:24:39 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:23.749 07:24:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:23.749 07:24:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:23.749 07:24:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.749 07:24:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:23.749 07:24:39 -- target/referrals.sh@26 -- # sort 00:07:23.749 07:24:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:23.749 07:24:39 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:23.749 07:24:39 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:23.749 07:24:39 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:23.749 07:24:39 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:23.749 07:24:39 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.749 07:24:39 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:24.008 07:24:40 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:24.008 07:24:40 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:24.008 07:24:40 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:24.008 07:24:40 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:24.008 07:24:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.008 07:24:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:24.008 07:24:40 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:24.008 07:24:40 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:24.008 07:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:24.008 07:24:40 -- common/autotest_common.sh@10 -- # set +x 00:07:24.008 07:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:24.008 07:24:40 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:24.008 07:24:40 -- target/referrals.sh@82 -- # jq length 00:07:24.008 07:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:24.008 07:24:40 -- common/autotest_common.sh@10 -- # set +x 00:07:24.008 07:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:24.008 07:24:40 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:24.008 07:24:40 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:24.008 07:24:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:24.008 07:24:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:24.008 07:24:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.008 07:24:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:24.008 07:24:40 -- target/referrals.sh@26 -- # sort 00:07:24.267 07:24:40 -- target/referrals.sh@26 -- # echo 00:07:24.267 07:24:40 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:24.267 07:24:40 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:24.267 07:24:40 -- target/referrals.sh@86 -- # nvmftestfini 00:07:24.267 07:24:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:24.267 07:24:40 -- nvmf/common.sh@116 -- # sync 00:07:24.267 07:24:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:24.267 07:24:40 -- nvmf/common.sh@119 -- # set +e 00:07:24.267 07:24:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:24.267 07:24:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:24.267 rmmod nvme_tcp 00:07:24.267 rmmod nvme_fabrics 00:07:24.267 rmmod nvme_keyring 00:07:24.267 07:24:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:24.267 07:24:40 -- nvmf/common.sh@123 -- # set -e 00:07:24.267 07:24:40 -- nvmf/common.sh@124 -- # return 0 00:07:24.267 07:24:40 -- nvmf/common.sh@477 
-- # '[' -n 4001670 ']' 00:07:24.267 07:24:40 -- nvmf/common.sh@478 -- # killprocess 4001670 00:07:24.267 07:24:40 -- common/autotest_common.sh@926 -- # '[' -z 4001670 ']' 00:07:24.267 07:24:40 -- common/autotest_common.sh@930 -- # kill -0 4001670 00:07:24.267 07:24:40 -- common/autotest_common.sh@931 -- # uname 00:07:24.267 07:24:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:24.267 07:24:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4001670 00:07:24.267 07:24:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:24.267 07:24:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:24.267 07:24:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4001670' 00:07:24.267 killing process with pid 4001670 00:07:24.267 07:24:40 -- common/autotest_common.sh@945 -- # kill 4001670 00:07:24.267 07:24:40 -- common/autotest_common.sh@950 -- # wait 4001670 00:07:24.527 07:24:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:24.527 07:24:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:24.527 07:24:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:24.527 07:24:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.527 07:24:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:24.527 07:24:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.527 07:24:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.527 07:24:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.063 07:24:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:27.063 00:07:27.063 real 0m7.087s 00:07:27.063 user 0m11.547s 00:07:27.063 sys 0m2.177s 00:07:27.063 07:24:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.063 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.063 ************************************ 00:07:27.063 END TEST nvmf_referrals 00:07:27.063 ************************************ 00:07:27.063 07:24:42 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:27.063 07:24:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:27.063 07:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.063 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.063 ************************************ 00:07:27.063 START TEST nvmf_connect_disconnect 00:07:27.063 ************************************ 00:07:27.063 07:24:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:27.063 * Looking for test storage... 
00:07:27.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.063 07:24:42 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.063 07:24:42 -- nvmf/common.sh@7 -- # uname -s 00:07:27.063 07:24:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.063 07:24:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.063 07:24:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.063 07:24:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.063 07:24:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.063 07:24:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.063 07:24:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.063 07:24:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.063 07:24:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.063 07:24:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.063 07:24:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.063 07:24:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.063 07:24:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.063 07:24:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.063 07:24:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.063 07:24:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.063 07:24:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.063 07:24:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.063 07:24:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.063 07:24:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.063 07:24:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.063 07:24:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.063 07:24:42 -- paths/export.sh@5 -- # export PATH 00:07:27.063 07:24:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.063 07:24:42 -- nvmf/common.sh@46 -- # : 0 00:07:27.063 07:24:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:27.063 07:24:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:27.063 07:24:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:27.063 07:24:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.063 07:24:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.063 07:24:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:27.063 07:24:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:27.063 07:24:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:27.063 07:24:42 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.063 07:24:42 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:27.063 07:24:42 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:27.063 07:24:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:27.063 07:24:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.063 07:24:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:27.063 07:24:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:27.063 07:24:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:27.063 07:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.063 07:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.063 07:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.063 07:24:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:27.063 07:24:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:27.063 07:24:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:27.063 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.964 07:24:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:28.964 07:24:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:28.964 07:24:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:28.964 07:24:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:28.964 07:24:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:28.964 07:24:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:28.964 07:24:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:28.964 07:24:44 -- nvmf/common.sh@294 -- # net_devs=() 00:07:28.964 07:24:44 -- nvmf/common.sh@294 -- # local -ga net_devs 
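The declarations above restart the PCI scan for this test; just before it, nvmf/common.sh was re-sourced and regenerated the host identity that every nvme discover/connect in these tests carries. Both invocations in this run produced the same uuid, so the NQN appears to be derived from the host rather than random. A sketch of the derivation, assuming (as the matching values suggest) that the host ID is simply the UUID suffix of the generated NQN:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # this run: nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: strip everything up to the last ':'
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json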
00:07:28.964 07:24:44 -- nvmf/common.sh@295 -- # e810=() 00:07:28.964 07:24:44 -- nvmf/common.sh@295 -- # local -ga e810 00:07:28.964 07:24:44 -- nvmf/common.sh@296 -- # x722=() 00:07:28.964 07:24:44 -- nvmf/common.sh@296 -- # local -ga x722 00:07:28.964 07:24:44 -- nvmf/common.sh@297 -- # mlx=() 00:07:28.964 07:24:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:28.964 07:24:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.964 07:24:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:28.964 07:24:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:28.964 07:24:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:28.964 07:24:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:28.964 07:24:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:28.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:28.964 07:24:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:28.964 07:24:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:28.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:28.964 07:24:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:28.964 07:24:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:28.964 07:24:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.964 07:24:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:28.964 07:24:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.964 07:24:44 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:07:28.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:28.964 07:24:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.964 07:24:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:28.964 07:24:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.964 07:24:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:28.964 07:24:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.964 07:24:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:28.964 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:28.964 07:24:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.964 07:24:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:28.964 07:24:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:28.964 07:24:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:28.964 07:24:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.964 07:24:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.964 07:24:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.964 07:24:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:28.964 07:24:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.964 07:24:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.964 07:24:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:28.964 07:24:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.964 07:24:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.964 07:24:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:28.964 07:24:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:28.964 07:24:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.964 07:24:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.964 07:24:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.964 07:24:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.964 07:24:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:28.964 07:24:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.964 07:24:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.964 07:24:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.964 07:24:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:28.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:07:28.964 00:07:28.964 --- 10.0.0.2 ping statistics --- 00:07:28.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.964 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:07:28.964 07:24:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:07:28.964 00:07:28.964 --- 10.0.0.1 ping statistics --- 00:07:28.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.964 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:28.964 07:24:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.964 07:24:44 -- nvmf/common.sh@410 -- # return 0 00:07:28.964 07:24:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:28.964 07:24:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.964 07:24:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:28.964 07:24:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.964 07:24:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:28.964 07:24:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:28.964 07:24:44 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:28.964 07:24:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:28.964 07:24:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:28.964 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.964 07:24:44 -- nvmf/common.sh@469 -- # nvmfpid=4003993 00:07:28.964 07:24:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.964 07:24:44 -- nvmf/common.sh@470 -- # waitforlisten 4003993 00:07:28.964 07:24:44 -- common/autotest_common.sh@819 -- # '[' -z 4003993 ']' 00:07:28.964 07:24:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.964 07:24:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:28.964 07:24:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.964 07:24:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:28.964 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.964 [2024-07-14 07:24:44.946382] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.964 [2024-07-14 07:24:44.946456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.964 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.964 [2024-07-14 07:24:45.011241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.964 [2024-07-14 07:24:45.120551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:28.964 [2024-07-14 07:24:45.120689] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.964 [2024-07-14 07:24:45.120721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.964 [2024-07-14 07:24:45.120733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
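nvmfappstart launched nvmf_tgt inside the target namespace, so the listener created below binds 10.0.0.2 natively; the reactor lines that follow are the rest of its startup notices. A sketch of the launch pattern visible in this log (waitforlisten's polling is not shown here, so the last line is an assumed stand-in):

    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # hypothetical stand-in for 'waitforlisten $nvmfpid'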
00:07:28.964 [2024-07-14 07:24:45.120792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.964 [2024-07-14 07:24:45.120821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.964 [2024-07-14 07:24:45.120888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.964 [2024-07-14 07:24:45.120892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.896 07:24:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:29.896 07:24:45 -- common/autotest_common.sh@852 -- # return 0 00:07:29.896 07:24:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:29.896 07:24:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:29.896 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.896 07:24:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:29.896 07:24:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.896 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.896 [2024-07-14 07:24:45.931426] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.896 07:24:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:29.896 07:24:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.896 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.896 07:24:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:29.896 07:24:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.896 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.896 07:24:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:29.896 07:24:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.896 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.896 07:24:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.896 07:24:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.896 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.896 [2024-07-14 07:24:45.982379] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.896 07:24:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:29.896 07:24:45 -- target/connect_disconnect.sh@34 -- # set +x 00:07:32.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:07:41.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:43.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:46.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:48.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:50.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:53.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:55.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:57.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:00.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:02.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:04.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:07.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:09.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:11.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:14.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:16.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:18.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:21.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:23.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:25.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:28.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:30.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:32.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:35.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:37.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:39.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:42.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:44.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:46.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:49.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:50.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:53.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:56.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:57.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:00.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:02.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:04.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:07.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:09.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:12.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:14.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:16.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:18.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:21.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:23.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:25.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:28.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:30.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:32.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:35.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:37.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:40.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:42.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:44.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:47.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:49.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:51.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:54.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:56.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:58.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:01.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:03.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:05.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:08.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:10.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:12.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:15.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:17.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:19.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:21.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:24.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:26.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:28.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:30.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:32.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:35.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:37.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:39.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:42.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:44.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:46.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:49.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:51.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:53.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:56.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:58.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:00.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:03.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:05.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:07.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:10.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:12.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:15.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:17.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:19.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:21.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:21.558 07:28:37 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
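That trap line closes the main loop: the rpc_cmd sequence further above stood the target up (tcp transport with -o -u 8192 -c 0, a 64 MB Malloc0 bdev with 512 B blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, listener on 10.0.0.2:4420), then num_iterations=100 connect/disconnect cycles ran, and each of the 100 'disconnected 1 controller(s)' lines above is nvme-cli confirming one cycle. The connect_disconnect.sh loop body is not shown in this log, so the following is a reconstruction under that assumption:

    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        # wait_for_ns is hypothetical: poll until the namespace with the expected serial appears
        wait_for_ns SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # prints 'NQN:... disconnected 1 controller(s)'
    done

The timestamps bracket the loop at roughly 3m49s, consistent with the 'real 3m57.283s' reported in the teardown just below.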
00:11:21.558 07:28:37 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:21.558 07:28:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:21.558 07:28:37 -- nvmf/common.sh@116 -- # sync 00:11:21.558 07:28:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:21.558 07:28:37 -- nvmf/common.sh@119 -- # set +e 00:11:21.558 07:28:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:21.558 07:28:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:21.558 rmmod nvme_tcp 00:11:21.558 rmmod nvme_fabrics 00:11:21.558 rmmod nvme_keyring 00:11:21.558 07:28:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:21.558 07:28:37 -- nvmf/common.sh@123 -- # set -e 00:11:21.558 07:28:37 -- nvmf/common.sh@124 -- # return 0 00:11:21.558 07:28:37 -- nvmf/common.sh@477 -- # '[' -n 4003993 ']' 00:11:21.558 07:28:37 -- nvmf/common.sh@478 -- # killprocess 4003993 00:11:21.558 07:28:37 -- common/autotest_common.sh@926 -- # '[' -z 4003993 ']' 00:11:21.558 07:28:37 -- common/autotest_common.sh@930 -- # kill -0 4003993 00:11:21.558 07:28:37 -- common/autotest_common.sh@931 -- # uname 00:11:21.558 07:28:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:21.558 07:28:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4003993 00:11:21.558 07:28:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:21.558 07:28:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:21.558 07:28:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4003993' 00:11:21.558 killing process with pid 4003993 00:11:21.558 07:28:37 -- common/autotest_common.sh@945 -- # kill 4003993 00:11:21.559 07:28:37 -- common/autotest_common.sh@950 -- # wait 4003993 00:11:21.849 07:28:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:21.849 07:28:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:21.849 07:28:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:21.849 07:28:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.849 07:28:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:21.849 07:28:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.849 07:28:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.849 07:28:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.389 07:28:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:24.389 00:11:24.389 real 3m57.283s 00:11:24.389 user 15m3.049s 00:11:24.390 sys 0m35.686s 00:11:24.390 07:28:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.390 07:28:40 -- common/autotest_common.sh@10 -- # set +x 00:11:24.390 ************************************ 00:11:24.390 END TEST nvmf_connect_disconnect 00:11:24.390 ************************************ 00:11:24.390 07:28:40 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:24.390 07:28:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:24.390 07:28:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:24.390 07:28:40 -- common/autotest_common.sh@10 -- # set +x 00:11:24.390 ************************************ 00:11:24.390 START TEST nvmf_multitarget 00:11:24.390 ************************************ 00:11:24.390 07:28:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:24.390 * Looking for test storage... 
00:11:24.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.390 07:28:40 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.390 07:28:40 -- nvmf/common.sh@7 -- # uname -s 00:11:24.390 07:28:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.390 07:28:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.390 07:28:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.390 07:28:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.390 07:28:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.390 07:28:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.390 07:28:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.390 07:28:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.390 07:28:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.390 07:28:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.390 07:28:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.390 07:28:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:24.390 07:28:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.390 07:28:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.390 07:28:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.390 07:28:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.390 07:28:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.390 07:28:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.390 07:28:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.390 07:28:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.390 07:28:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.390 07:28:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.390 07:28:40 -- paths/export.sh@5 -- # export PATH 00:11:24.390 07:28:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.390 07:28:40 -- nvmf/common.sh@46 -- # : 0 00:11:24.390 07:28:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:24.390 07:28:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:24.390 07:28:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:24.390 07:28:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.390 07:28:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.390 07:28:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:24.390 07:28:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:24.390 07:28:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:24.390 07:28:40 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:24.390 07:28:40 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:24.390 07:28:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:24.390 07:28:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.390 07:28:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:24.390 07:28:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:24.390 07:28:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:24.390 07:28:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.390 07:28:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.390 07:28:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.390 07:28:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:24.390 07:28:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:24.390 07:28:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:24.390 07:28:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.305 07:28:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:26.305 07:28:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:26.305 07:28:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:26.305 07:28:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:26.305 07:28:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:26.305 07:28:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:26.305 07:28:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:26.305 07:28:41 -- nvmf/common.sh@294 -- # net_devs=() 00:11:26.305 07:28:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:26.305 07:28:41 -- 
nvmf/common.sh@295 -- # e810=() 00:11:26.305 07:28:41 -- nvmf/common.sh@295 -- # local -ga e810 00:11:26.305 07:28:41 -- nvmf/common.sh@296 -- # x722=() 00:11:26.305 07:28:41 -- nvmf/common.sh@296 -- # local -ga x722 00:11:26.305 07:28:41 -- nvmf/common.sh@297 -- # mlx=() 00:11:26.305 07:28:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:26.305 07:28:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.305 07:28:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.305 07:28:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.305 07:28:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.305 07:28:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.305 07:28:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.306 07:28:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.306 07:28:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.306 07:28:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.306 07:28:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.306 07:28:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.306 07:28:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:26.306 07:28:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:26.306 07:28:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:26.306 07:28:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:26.306 07:28:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:26.306 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:26.306 07:28:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:26.306 07:28:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:26.306 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:26.306 07:28:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:26.306 07:28:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:26.306 07:28:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.306 07:28:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:26.306 07:28:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.306 07:28:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:11:26.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:26.306 07:28:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.306 07:28:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:26.306 07:28:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.306 07:28:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:26.306 07:28:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.306 07:28:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:26.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:26.306 07:28:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.306 07:28:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:26.306 07:28:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:26.306 07:28:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:26.306 07:28:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.306 07:28:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.306 07:28:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.306 07:28:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:26.306 07:28:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.306 07:28:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.306 07:28:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:26.306 07:28:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.306 07:28:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.306 07:28:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:26.306 07:28:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:26.306 07:28:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.306 07:28:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.306 07:28:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.306 07:28:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.306 07:28:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:26.306 07:28:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.306 07:28:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.306 07:28:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.306 07:28:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:26.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:11:26.306 00:11:26.306 --- 10.0.0.2 ping statistics --- 00:11:26.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.306 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:11:26.306 07:28:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:11:26.306 00:11:26.306 --- 10.0.0.1 ping statistics --- 00:11:26.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.306 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:11:26.306 07:28:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.306 07:28:42 -- nvmf/common.sh@410 -- # return 0 00:11:26.306 07:28:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:26.306 07:28:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.306 07:28:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:26.306 07:28:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.306 07:28:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:26.306 07:28:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:26.306 07:28:42 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:26.306 07:28:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:26.306 07:28:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:26.306 07:28:42 -- common/autotest_common.sh@10 -- # set +x 00:11:26.306 07:28:42 -- nvmf/common.sh@469 -- # nvmfpid=4036148 00:11:26.306 07:28:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.306 07:28:42 -- nvmf/common.sh@470 -- # waitforlisten 4036148 00:11:26.306 07:28:42 -- common/autotest_common.sh@819 -- # '[' -z 4036148 ']' 00:11:26.306 07:28:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.306 07:28:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:26.306 07:28:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.306 07:28:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:26.306 07:28:42 -- common/autotest_common.sh@10 -- # set +x 00:11:26.306 [2024-07-14 07:28:42.220801] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:26.306 [2024-07-14 07:28:42.220901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.307 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.307 [2024-07-14 07:28:42.286000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.307 [2024-07-14 07:28:42.391795] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:26.307 [2024-07-14 07:28:42.391972] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.307 [2024-07-14 07:28:42.391998] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.307 [2024-07-14 07:28:42.392011] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
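[editor's sketch] The TCP test-network bring-up that common.sh performed above (nvmf_tcp_init) condenses to the shell sequence below. This is a sketch assembled only from the commands visible in this run; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses were detected on this particular node and are not fixed by the harness.

    # Target side lives in a network namespace; initiator side stays in the root namespace.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                           # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                     # verify reachability in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two successful pings above (0.250 ms and 0.236 ms round-trip) are what allow nvmf_tcp_init to return 0 and the test to proceed to modprobe nvme-tcp.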
00:11:26.307 [2024-07-14 07:28:42.392085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.307 [2024-07-14 07:28:42.392112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.307 [2024-07-14 07:28:42.392162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.307 [2024-07-14 07:28:42.392165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.242 07:28:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:27.242 07:28:43 -- common/autotest_common.sh@852 -- # return 0 00:11:27.242 07:28:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:27.242 07:28:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:27.242 07:28:43 -- common/autotest_common.sh@10 -- # set +x 00:11:27.242 07:28:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.242 07:28:43 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:27.242 07:28:43 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:27.243 07:28:43 -- target/multitarget.sh@21 -- # jq length 00:11:27.243 07:28:43 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:27.243 07:28:43 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:27.501 "nvmf_tgt_1" 00:11:27.501 07:28:43 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:27.501 "nvmf_tgt_2" 00:11:27.501 07:28:43 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:27.501 07:28:43 -- target/multitarget.sh@28 -- # jq length 00:11:27.501 07:28:43 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:27.501 07:28:43 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:27.759 true 00:11:27.759 07:28:43 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:27.759 true 00:11:27.759 07:28:43 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:27.759 07:28:43 -- target/multitarget.sh@35 -- # jq length 00:11:28.017 07:28:43 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:28.017 07:28:43 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:28.017 07:28:43 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:28.017 07:28:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:28.017 07:28:43 -- nvmf/common.sh@116 -- # sync 00:11:28.017 07:28:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:28.017 07:28:43 -- nvmf/common.sh@119 -- # set +e 00:11:28.017 07:28:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:28.017 07:28:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:28.017 rmmod nvme_tcp 00:11:28.017 rmmod nvme_fabrics 00:11:28.017 rmmod nvme_keyring 00:11:28.017 07:28:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:28.017 07:28:44 -- nvmf/common.sh@123 -- # set -e 00:11:28.017 07:28:44 -- nvmf/common.sh@124 -- # return 0 
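[editor's sketch] The multitarget exercise just traced reduces to the RPC sequence below; every call and flag is taken from the log, and $RPC is only a shorthand introduced here for the full multitarget_rpc.py path.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length            # expect 1: only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32  # add two extra targets, 32-entry admin queues
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length            # expect 3: default plus the two new targets
    $RPC nvmf_delete_target -n nvmf_tgt_1        # each delete returns "true" above
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length            # back to 1; any mismatch fails the test

The '[' 1 '!=' 1 ']' and '[' 3 '!=' 3 ']' checks in the trace are exactly these jq length comparisons.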
00:11:28.017 07:28:44 -- nvmf/common.sh@477 -- # '[' -n 4036148 ']' 00:11:28.017 07:28:44 -- nvmf/common.sh@478 -- # killprocess 4036148 00:11:28.017 07:28:44 -- common/autotest_common.sh@926 -- # '[' -z 4036148 ']' 00:11:28.017 07:28:44 -- common/autotest_common.sh@930 -- # kill -0 4036148 00:11:28.017 07:28:44 -- common/autotest_common.sh@931 -- # uname 00:11:28.017 07:28:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:28.017 07:28:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4036148 00:11:28.017 07:28:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:28.017 07:28:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:28.017 07:28:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4036148' 00:11:28.017 killing process with pid 4036148 00:11:28.017 07:28:44 -- common/autotest_common.sh@945 -- # kill 4036148 00:11:28.017 07:28:44 -- common/autotest_common.sh@950 -- # wait 4036148 00:11:28.276 07:28:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:28.276 07:28:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:28.276 07:28:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:28.276 07:28:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.276 07:28:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:28.276 07:28:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.276 07:28:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.276 07:28:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.809 07:28:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:30.809 00:11:30.809 real 0m6.354s 00:11:30.809 user 0m9.284s 00:11:30.809 sys 0m1.883s 00:11:30.809 07:28:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.809 07:28:46 -- common/autotest_common.sh@10 -- # set +x 00:11:30.809 ************************************ 00:11:30.809 END TEST nvmf_multitarget 00:11:30.809 ************************************ 00:11:30.809 07:28:46 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:30.809 07:28:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:30.809 07:28:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:30.809 07:28:46 -- common/autotest_common.sh@10 -- # set +x 00:11:30.809 ************************************ 00:11:30.809 START TEST nvmf_rpc 00:11:30.809 ************************************ 00:11:30.809 07:28:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:30.809 * Looking for test storage... 
00:11:30.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.809 07:28:46 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.809 07:28:46 -- nvmf/common.sh@7 -- # uname -s 00:11:30.809 07:28:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.809 07:28:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.809 07:28:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.809 07:28:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.809 07:28:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.809 07:28:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.809 07:28:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.809 07:28:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.809 07:28:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.809 07:28:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.809 07:28:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.809 07:28:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.809 07:28:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.809 07:28:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.809 07:28:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.809 07:28:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.809 07:28:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.809 07:28:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.809 07:28:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.809 07:28:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.809 07:28:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.809 07:28:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.809 07:28:46 -- paths/export.sh@5 -- # export PATH 00:11:30.809 07:28:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.809 07:28:46 -- nvmf/common.sh@46 -- # : 0 00:11:30.809 07:28:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:30.809 07:28:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:30.809 07:28:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:30.809 07:28:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.810 07:28:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.810 07:28:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:30.810 07:28:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:30.810 07:28:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:30.810 07:28:46 -- target/rpc.sh@11 -- # loops=5 00:11:30.810 07:28:46 -- target/rpc.sh@23 -- # nvmftestinit 00:11:30.810 07:28:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:30.810 07:28:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.810 07:28:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:30.810 07:28:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:30.810 07:28:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:30.810 07:28:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.810 07:28:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.810 07:28:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.810 07:28:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:30.810 07:28:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:30.810 07:28:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:30.810 07:28:46 -- common/autotest_common.sh@10 -- # set +x 00:11:32.709 07:28:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:32.709 07:28:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:32.709 07:28:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:32.709 07:28:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:32.709 07:28:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:32.709 07:28:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:32.709 07:28:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:32.709 07:28:48 -- nvmf/common.sh@294 -- # net_devs=() 00:11:32.709 07:28:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:32.709 07:28:48 -- nvmf/common.sh@295 -- # e810=() 00:11:32.709 07:28:48 -- nvmf/common.sh@295 -- # local -ga e810 00:11:32.709 
07:28:48 -- nvmf/common.sh@296 -- # x722=() 00:11:32.709 07:28:48 -- nvmf/common.sh@296 -- # local -ga x722 00:11:32.709 07:28:48 -- nvmf/common.sh@297 -- # mlx=() 00:11:32.709 07:28:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:32.709 07:28:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.709 07:28:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:32.709 07:28:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:32.709 07:28:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:32.709 07:28:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:32.709 07:28:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:32.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:32.709 07:28:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:32.709 07:28:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:32.709 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:32.709 07:28:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:32.709 07:28:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:32.709 07:28:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:32.709 07:28:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.709 07:28:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:32.709 07:28:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.709 07:28:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:32.710 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:32.710 07:28:48 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:32.710 07:28:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:32.710 07:28:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.710 07:28:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:32.710 07:28:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.710 07:28:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:32.710 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:32.710 07:28:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.710 07:28:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:32.710 07:28:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:32.710 07:28:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:32.710 07:28:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:32.710 07:28:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:32.710 07:28:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.710 07:28:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.710 07:28:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.710 07:28:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:32.710 07:28:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.710 07:28:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.710 07:28:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:32.710 07:28:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.710 07:28:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.710 07:28:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:32.710 07:28:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:32.710 07:28:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.710 07:28:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.710 07:28:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.710 07:28:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.710 07:28:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:32.710 07:28:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.710 07:28:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.710 07:28:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.710 07:28:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:32.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:11:32.710 00:11:32.710 --- 10.0.0.2 ping statistics --- 00:11:32.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.710 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:11:32.710 07:28:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:11:32.710 00:11:32.710 --- 10.0.0.1 ping statistics --- 00:11:32.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.710 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:32.710 07:28:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.710 07:28:48 -- nvmf/common.sh@410 -- # return 0 00:11:32.710 07:28:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:32.710 07:28:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.710 07:28:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:32.710 07:28:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:32.710 07:28:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.710 07:28:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:32.710 07:28:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:32.710 07:28:48 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:32.710 07:28:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:32.710 07:28:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:32.710 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:11:32.710 07:28:48 -- nvmf/common.sh@469 -- # nvmfpid=4038393 00:11:32.710 07:28:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.710 07:28:48 -- nvmf/common.sh@470 -- # waitforlisten 4038393 00:11:32.710 07:28:48 -- common/autotest_common.sh@819 -- # '[' -z 4038393 ']' 00:11:32.710 07:28:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.710 07:28:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:32.710 07:28:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.710 07:28:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:32.710 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:11:32.710 [2024-07-14 07:28:48.627822] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:32.710 [2024-07-14 07:28:48.627917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.710 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.710 [2024-07-14 07:28:48.697493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.710 [2024-07-14 07:28:48.816498] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:32.710 [2024-07-14 07:28:48.816679] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.710 [2024-07-14 07:28:48.816698] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.710 [2024-07-14 07:28:48.816713] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
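[editor's sketch] As in the multitarget run, nvmfappstart launches the target application inside the namespace and blocks until its RPC socket answers. A condensed view, using the paths, core mask, and pid from this run (the backgrounding with & and $! is a simplification of the harness; waitforlisten is its helper that polls the RPC socket, /var/tmp/spdk.sock here, with up to 100 retries):

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &    # shm id 0, all tracepoint groups, cores 0-3
    nvmfpid=$!                     # 4038393 in this run
    waitforlisten $nvmfpid         # wait for /var/tmp/spdk.sock to accept RPCs

The -m 0xF mask is why DPDK reports four available cores and a reactor starts on each of cores 0 through 3 below.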
00:11:32.710 [2024-07-14 07:28:48.816819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.710 [2024-07-14 07:28:48.816875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.710 [2024-07-14 07:28:48.816932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.710 [2024-07-14 07:28:48.816935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.644 07:28:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:33.644 07:28:49 -- common/autotest_common.sh@852 -- # return 0 00:11:33.644 07:28:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:33.644 07:28:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:33.644 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.644 07:28:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.644 07:28:49 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:33.644 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.644 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.644 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.644 07:28:49 -- target/rpc.sh@26 -- # stats='{ 00:11:33.644 "tick_rate": 2700000000, 00:11:33.644 "poll_groups": [ 00:11:33.644 { 00:11:33.644 "name": "nvmf_tgt_poll_group_0", 00:11:33.644 "admin_qpairs": 0, 00:11:33.644 "io_qpairs": 0, 00:11:33.644 "current_admin_qpairs": 0, 00:11:33.644 "current_io_qpairs": 0, 00:11:33.644 "pending_bdev_io": 0, 00:11:33.644 "completed_nvme_io": 0, 00:11:33.644 "transports": [] 00:11:33.644 }, 00:11:33.644 { 00:11:33.644 "name": "nvmf_tgt_poll_group_1", 00:11:33.644 "admin_qpairs": 0, 00:11:33.644 "io_qpairs": 0, 00:11:33.644 "current_admin_qpairs": 0, 00:11:33.644 "current_io_qpairs": 0, 00:11:33.644 "pending_bdev_io": 0, 00:11:33.644 "completed_nvme_io": 0, 00:11:33.644 "transports": [] 00:11:33.644 }, 00:11:33.644 { 00:11:33.644 "name": "nvmf_tgt_poll_group_2", 00:11:33.644 "admin_qpairs": 0, 00:11:33.644 "io_qpairs": 0, 00:11:33.644 "current_admin_qpairs": 0, 00:11:33.644 "current_io_qpairs": 0, 00:11:33.644 "pending_bdev_io": 0, 00:11:33.644 "completed_nvme_io": 0, 00:11:33.644 "transports": [] 00:11:33.644 }, 00:11:33.644 { 00:11:33.644 "name": "nvmf_tgt_poll_group_3", 00:11:33.644 "admin_qpairs": 0, 00:11:33.644 "io_qpairs": 0, 00:11:33.644 "current_admin_qpairs": 0, 00:11:33.644 "current_io_qpairs": 0, 00:11:33.644 "pending_bdev_io": 0, 00:11:33.644 "completed_nvme_io": 0, 00:11:33.644 "transports": [] 00:11:33.644 } 00:11:33.644 ] 00:11:33.644 }' 00:11:33.644 07:28:49 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:33.644 07:28:49 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:33.644 07:28:49 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:33.644 07:28:49 -- target/rpc.sh@15 -- # wc -l 00:11:33.644 07:28:49 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:33.644 07:28:49 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:33.644 07:28:49 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:33.644 07:28:49 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.644 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.644 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.644 [2024-07-14 07:28:49.674611] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.644 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.644 07:28:49 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:33.644 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.644 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.644 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.644 07:28:49 -- target/rpc.sh@33 -- # stats='{ 00:11:33.644 "tick_rate": 2700000000, 00:11:33.644 "poll_groups": [ 00:11:33.644 { 00:11:33.644 "name": "nvmf_tgt_poll_group_0", 00:11:33.644 "admin_qpairs": 0, 00:11:33.644 "io_qpairs": 0, 00:11:33.644 "current_admin_qpairs": 0, 00:11:33.644 "current_io_qpairs": 0, 00:11:33.644 "pending_bdev_io": 0, 00:11:33.644 "completed_nvme_io": 0, 00:11:33.644 "transports": [ 00:11:33.644 { 00:11:33.644 "trtype": "TCP" 00:11:33.644 } 00:11:33.644 ] 00:11:33.644 }, 00:11:33.644 { 00:11:33.644 "name": "nvmf_tgt_poll_group_1", 00:11:33.644 "admin_qpairs": 0, 00:11:33.644 "io_qpairs": 0, 00:11:33.644 "current_admin_qpairs": 0, 00:11:33.644 "current_io_qpairs": 0, 00:11:33.644 "pending_bdev_io": 0, 00:11:33.644 "completed_nvme_io": 0, 00:11:33.644 "transports": [ 00:11:33.644 { 00:11:33.644 "trtype": "TCP" 00:11:33.644 } 00:11:33.644 ] 00:11:33.644 }, 00:11:33.644 { 00:11:33.644 "name": "nvmf_tgt_poll_group_2", 00:11:33.644 "admin_qpairs": 0, 00:11:33.644 "io_qpairs": 0, 00:11:33.644 "current_admin_qpairs": 0, 00:11:33.644 "current_io_qpairs": 0, 00:11:33.644 "pending_bdev_io": 0, 00:11:33.644 "completed_nvme_io": 0, 00:11:33.644 "transports": [ 00:11:33.644 { 00:11:33.644 "trtype": "TCP" 00:11:33.644 } 00:11:33.644 ] 00:11:33.644 }, 00:11:33.644 { 00:11:33.644 "name": "nvmf_tgt_poll_group_3", 00:11:33.644 "admin_qpairs": 0, 00:11:33.644 "io_qpairs": 0, 00:11:33.644 "current_admin_qpairs": 0, 00:11:33.644 "current_io_qpairs": 0, 00:11:33.644 "pending_bdev_io": 0, 00:11:33.644 "completed_nvme_io": 0, 00:11:33.644 "transports": [ 00:11:33.644 { 00:11:33.644 "trtype": "TCP" 00:11:33.644 } 00:11:33.644 ] 00:11:33.644 } 00:11:33.644 ] 00:11:33.644 }' 00:11:33.644 07:28:49 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:33.644 07:28:49 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:33.644 07:28:49 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:33.644 07:28:49 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:33.644 07:28:49 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:33.644 07:28:49 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:33.644 07:28:49 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:33.644 07:28:49 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:33.644 07:28:49 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:33.644 07:28:49 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:33.644 07:28:49 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:33.644 07:28:49 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:33.644 07:28:49 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:33.644 07:28:49 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:33.644 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.644 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.644 Malloc1 00:11:33.644 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.644 07:28:49 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.644 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.644 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.644 
07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.644 07:28:49 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.644 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.644 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.902 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.902 07:28:49 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:33.902 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.902 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.902 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.902 07:28:49 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.902 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.902 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.902 [2024-07-14 07:28:49.832068] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.902 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.902 07:28:49 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:33.902 07:28:49 -- common/autotest_common.sh@640 -- # local es=0 00:11:33.902 07:28:49 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:33.902 07:28:49 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:33.902 07:28:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:33.902 07:28:49 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:33.902 07:28:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:33.902 07:28:49 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:33.902 07:28:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:33.903 07:28:49 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:33.903 07:28:49 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:33.903 07:28:49 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:33.903 [2024-07-14 07:28:49.854664] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:33.903 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:33.903 could not add new controller: failed to write to nvme-fabrics device 00:11:33.903 07:28:49 -- common/autotest_common.sh@643 -- # es=1 00:11:33.903 07:28:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:33.903 07:28:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:33.903 07:28:49 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:11:33.903 07:28:49 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.903 07:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:33.903 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:11:33.903 07:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:33.903 07:28:49 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.467 07:28:50 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.467 07:28:50 -- common/autotest_common.sh@1177 -- # local i=0 00:11:34.467 07:28:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.467 07:28:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:34.467 07:28:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:36.993 07:28:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:36.993 07:28:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:36.993 07:28:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.993 07:28:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:36.993 07:28:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.993 07:28:52 -- common/autotest_common.sh@1187 -- # return 0 00:11:36.993 07:28:52 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.993 07:28:52 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.993 07:28:52 -- common/autotest_common.sh@1198 -- # local i=0 00:11:36.993 07:28:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:36.993 07:28:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.993 07:28:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:36.993 07:28:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.993 07:28:52 -- common/autotest_common.sh@1210 -- # return 0 00:11:36.993 07:28:52 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.993 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.993 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:11:36.993 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.993 07:28:52 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.993 07:28:52 -- common/autotest_common.sh@640 -- # local es=0 00:11:36.993 07:28:52 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.993 07:28:52 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:36.993 07:28:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:36.993 07:28:52 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:36.993 07:28:52 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:36.993 07:28:52 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:36.993 07:28:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:36.993 07:28:52 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:36.993 07:28:52 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:36.993 07:28:52 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.993 [2024-07-14 07:28:52.694929] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:36.993 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:36.993 could not add new controller: failed to write to nvme-fabrics device 00:11:36.993 07:28:52 -- common/autotest_common.sh@643 -- # es=1 00:11:36.993 07:28:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:36.993 07:28:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:36.993 07:28:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:36.993 07:28:52 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:36.993 07:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:36.993 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:11:36.993 07:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:36.993 07:28:52 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.252 07:28:53 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.252 07:28:53 -- common/autotest_common.sh@1177 -- # local i=0 00:11:37.252 07:28:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.252 07:28:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:37.252 07:28:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:39.775 07:28:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:39.775 07:28:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:39.775 07:28:55 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.775 07:28:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:39.775 07:28:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.775 07:28:55 -- common/autotest_common.sh@1187 -- # return 0 00:11:39.775 07:28:55 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.775 07:28:55 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.775 07:28:55 -- common/autotest_common.sh@1198 -- # local i=0 00:11:39.775 07:28:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:39.775 07:28:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.775 07:28:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:39.775 07:28:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.775 07:28:55 -- common/autotest_common.sh@1210 -- # return 0 00:11:39.775 07:28:55 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.775 07:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.775 07:28:55 -- common/autotest_common.sh@10 -- # set +x 00:11:39.775 07:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.775 07:28:55 -- target/rpc.sh@81 -- # seq 1 5 00:11:39.775 07:28:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:39.775 07:28:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.775 07:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.775 07:28:55 -- common/autotest_common.sh@10 -- # set +x 00:11:39.775 07:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.775 07:28:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.775 07:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.775 07:28:55 -- common/autotest_common.sh@10 -- # set +x 00:11:39.775 [2024-07-14 07:28:55.481885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.775 07:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.775 07:28:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:39.775 07:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.775 07:28:55 -- common/autotest_common.sh@10 -- # set +x 00:11:39.775 07:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.775 07:28:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.775 07:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.775 07:28:55 -- common/autotest_common.sh@10 -- # set +x 00:11:39.775 07:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.775 07:28:55 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.060 07:28:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.060 07:28:56 -- common/autotest_common.sh@1177 -- # local i=0 00:11:40.060 07:28:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.060 07:28:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:40.060 07:28:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:42.585 07:28:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:42.585 07:28:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:42.585 07:28:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.585 07:28:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:42.585 07:28:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.585 07:28:58 -- common/autotest_common.sh@1187 -- # return 0 00:11:42.585 07:28:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.585 07:28:58 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.585 07:28:58 -- common/autotest_common.sh@1198 -- # local i=0 00:11:42.585 07:28:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:42.585 07:28:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:42.585 07:28:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:42.585 07:28:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.585 07:28:58 -- common/autotest_common.sh@1210 -- # return 0 00:11:42.585 07:28:58 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:42.585 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.585 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:11:42.585 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.585 07:28:58 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.585 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.585 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:11:42.585 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.585 07:28:58 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:42.585 07:28:58 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.585 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.585 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:11:42.585 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.585 07:28:58 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.585 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.585 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:11:42.585 [2024-07-14 07:28:58.306310] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.585 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.585 07:28:58 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:42.585 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.585 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:11:42.585 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.585 07:28:58 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.586 07:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.586 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:11:42.586 07:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.586 07:28:58 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.843 07:28:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.843 07:28:58 -- common/autotest_common.sh@1177 -- # local i=0 00:11:42.843 07:28:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.843 07:28:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:42.844 07:28:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:45.370 07:29:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:45.370 07:29:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:45.370 07:29:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.370 07:29:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:45.370 07:29:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.370 07:29:00 -- 
common/autotest_common.sh@1187 -- # return 0 00:11:45.370 07:29:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.370 07:29:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.370 07:29:01 -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.370 07:29:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:45.370 07:29:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.370 07:29:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:45.370 07:29:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.370 07:29:01 -- common/autotest_common.sh@1210 -- # return 0 00:11:45.370 07:29:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.370 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:45.370 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:11:45.370 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:45.370 07:29:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.370 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:45.370 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:11:45.370 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:45.370 07:29:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:45.370 07:29:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.370 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:45.370 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:11:45.370 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:45.370 07:29:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.370 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:45.370 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:11:45.370 [2024-07-14 07:29:01.085275] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.370 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:45.370 07:29:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:45.370 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:45.370 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:11:45.370 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:45.370 07:29:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:45.370 07:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:45.370 07:29:01 -- common/autotest_common.sh@10 -- # set +x 00:11:45.370 07:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:45.370 07:29:01 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.628 07:29:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.628 07:29:01 -- common/autotest_common.sh@1177 -- # local i=0 00:11:45.628 07:29:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.628 07:29:01 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:11:45.628 07:29:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:48.154 07:29:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:48.154 07:29:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:48.154 07:29:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.154 07:29:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:48.154 07:29:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.154 07:29:03 -- common/autotest_common.sh@1187 -- # return 0 00:11:48.154 07:29:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.154 07:29:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.154 07:29:03 -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.154 07:29:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:48.154 07:29:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.154 07:29:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:48.154 07:29:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.154 07:29:03 -- common/autotest_common.sh@1210 -- # return 0 00:11:48.154 07:29:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:48.154 07:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.154 07:29:03 -- common/autotest_common.sh@10 -- # set +x 00:11:48.154 07:29:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.154 07:29:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.154 07:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.154 07:29:03 -- common/autotest_common.sh@10 -- # set +x 00:11:48.154 07:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.154 07:29:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:48.154 07:29:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:48.154 07:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.154 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:11:48.154 07:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.154 07:29:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.154 07:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.154 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:11:48.154 [2024-07-14 07:29:04.014545] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.154 07:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.154 07:29:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:48.154 07:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.154 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:11:48.154 07:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.154 07:29:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:48.154 07:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.154 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:11:48.154 07:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.154 
07:29:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.720 07:29:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.720 07:29:04 -- common/autotest_common.sh@1177 -- # local i=0 00:11:48.720 07:29:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.721 07:29:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:48.721 07:29:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:50.619 07:29:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:50.619 07:29:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:50.619 07:29:06 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.619 07:29:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:50.619 07:29:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.619 07:29:06 -- common/autotest_common.sh@1187 -- # return 0 00:11:50.619 07:29:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.619 07:29:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.619 07:29:06 -- common/autotest_common.sh@1198 -- # local i=0 00:11:50.619 07:29:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:50.619 07:29:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.619 07:29:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:50.619 07:29:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.619 07:29:06 -- common/autotest_common.sh@1210 -- # return 0 00:11:50.619 07:29:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.619 07:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:50.619 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:11:50.619 07:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:50.619 07:29:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.619 07:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:50.619 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:11:50.619 07:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:50.619 07:29:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:50.619 07:29:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.619 07:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:50.619 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:11:50.878 07:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:50.878 07:29:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.878 07:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:50.878 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:11:50.878 [2024-07-14 07:29:06.794240] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.878 07:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:50.878 07:29:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:50.878 
07:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:50.878 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:11:50.878 07:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:50.878 07:29:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.878 07:29:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:50.878 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:11:50.878 07:29:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:50.878 07:29:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.444 07:29:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.444 07:29:07 -- common/autotest_common.sh@1177 -- # local i=0 00:11:51.444 07:29:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.444 07:29:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:51.444 07:29:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:53.342 07:29:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:53.342 07:29:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:53.342 07:29:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.342 07:29:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:53.342 07:29:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.342 07:29:09 -- common/autotest_common.sh@1187 -- # return 0 00:11:53.342 07:29:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.342 07:29:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.342 07:29:09 -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.342 07:29:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:53.342 07:29:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.342 07:29:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:53.342 07:29:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.342 07:29:09 -- common/autotest_common.sh@1210 -- # return 0 00:11:53.342 07:29:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.342 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.342 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@99 -- # seq 1 5 00:11:53.599 07:29:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:53.599 07:29:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 [2024-07-14 07:29:09.545125] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:53.599 07:29:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 [2024-07-14 07:29:09.593205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:53.599 07:29:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 [2024-07-14 07:29:09.641366] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:53.599 07:29:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 [2024-07-14 07:29:09.689522] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 
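[Annotation] The second phase starting at target/rpc.sh@99 repeats the lifecycle without ever connecting a host, this time with a single namespace. Condensed from the trace:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid 1, per the remove_ns 1 that follows
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done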
07:29:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.599 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.599 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.599 07:29:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.599 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.600 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.600 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.600 07:29:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.600 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.600 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.600 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.600 07:29:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:53.600 07:29:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.600 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.600 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.600 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.600 07:29:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.600 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.600 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.600 [2024-07-14 07:29:09.737699] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.600 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.600 07:29:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.600 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.600 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.600 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.600 07:29:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.600 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.600 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.600 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.600 07:29:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.600 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.600 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.600 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.600 07:29:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.600 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.600 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.857 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.857 07:29:09 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:11:53.857 07:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.857 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.857 07:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.857 07:29:09 -- target/rpc.sh@110 -- # stats='{ 00:11:53.857 "tick_rate": 2700000000, 00:11:53.857 "poll_groups": [ 00:11:53.857 { 00:11:53.857 "name": "nvmf_tgt_poll_group_0", 00:11:53.857 "admin_qpairs": 2, 00:11:53.857 "io_qpairs": 84, 00:11:53.857 "current_admin_qpairs": 0, 00:11:53.857 "current_io_qpairs": 0, 00:11:53.857 "pending_bdev_io": 0, 00:11:53.857 "completed_nvme_io": 248, 00:11:53.857 "transports": [ 00:11:53.857 { 00:11:53.857 "trtype": "TCP" 00:11:53.857 } 00:11:53.857 ] 00:11:53.857 }, 00:11:53.857 { 00:11:53.857 "name": "nvmf_tgt_poll_group_1", 00:11:53.857 "admin_qpairs": 2, 00:11:53.857 "io_qpairs": 84, 00:11:53.857 "current_admin_qpairs": 0, 00:11:53.857 "current_io_qpairs": 0, 00:11:53.857 "pending_bdev_io": 0, 00:11:53.857 "completed_nvme_io": 184, 00:11:53.857 "transports": [ 00:11:53.857 { 00:11:53.857 "trtype": "TCP" 00:11:53.857 } 00:11:53.857 ] 00:11:53.858 }, 00:11:53.858 { 00:11:53.858 "name": "nvmf_tgt_poll_group_2", 00:11:53.858 "admin_qpairs": 1, 00:11:53.858 "io_qpairs": 84, 00:11:53.858 "current_admin_qpairs": 0, 00:11:53.858 "current_io_qpairs": 0, 00:11:53.858 "pending_bdev_io": 0, 00:11:53.858 "completed_nvme_io": 135, 00:11:53.858 "transports": [ 00:11:53.858 { 00:11:53.858 "trtype": "TCP" 00:11:53.858 } 00:11:53.858 ] 00:11:53.858 }, 00:11:53.858 { 00:11:53.858 "name": "nvmf_tgt_poll_group_3", 00:11:53.858 "admin_qpairs": 2, 00:11:53.858 "io_qpairs": 84, 00:11:53.858 "current_admin_qpairs": 0, 00:11:53.858 "current_io_qpairs": 0, 00:11:53.858 "pending_bdev_io": 0, 00:11:53.858 "completed_nvme_io": 119, 00:11:53.858 "transports": [ 00:11:53.858 { 00:11:53.858 "trtype": "TCP" 00:11:53.858 } 00:11:53.858 ] 00:11:53.858 } 00:11:53.858 ] 00:11:53.858 }' 00:11:53.858 07:29:09 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:53.858 07:29:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:53.858 07:29:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:53.858 07:29:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:53.858 07:29:09 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:53.858 07:29:09 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:53.858 07:29:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:53.858 07:29:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:53.858 07:29:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:53.858 07:29:09 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:53.858 07:29:09 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:53.858 07:29:09 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:53.858 07:29:09 -- target/rpc.sh@123 -- # nvmftestfini 00:11:53.858 07:29:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:53.858 07:29:09 -- nvmf/common.sh@116 -- # sync 00:11:53.858 07:29:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:53.858 07:29:09 -- nvmf/common.sh@119 -- # set +e 00:11:53.858 07:29:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:53.858 07:29:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:53.858 rmmod nvme_tcp 00:11:53.858 rmmod nvme_fabrics 00:11:53.858 rmmod nvme_keyring 00:11:53.858 07:29:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:53.858 07:29:09 -- nvmf/common.sh@123 -- # set -e 00:11:53.858 07:29:09 -- 
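[Annotation] The jsum helper behind those assertions (target/rpc.sh@19-20) sums a jq projection over the captured stats. A minimal sketch consistent with the trace follows; feeding $stats through a here-string is an assumption, since the trace only shows the jq and awk stages:

    jsum() {
        # Sum every numeric value selected by the jq filter in $1.
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4*84 = 336 in this run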
nvmf/common.sh@124 -- # return 0 00:11:53.858 07:29:09 -- nvmf/common.sh@477 -- # '[' -n 4038393 ']' 00:11:53.858 07:29:09 -- nvmf/common.sh@478 -- # killprocess 4038393 00:11:53.858 07:29:09 -- common/autotest_common.sh@926 -- # '[' -z 4038393 ']' 00:11:53.858 07:29:09 -- common/autotest_common.sh@930 -- # kill -0 4038393 00:11:53.858 07:29:09 -- common/autotest_common.sh@931 -- # uname 00:11:53.858 07:29:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:53.858 07:29:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4038393 00:11:53.858 07:29:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:53.858 07:29:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:53.858 07:29:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4038393' 00:11:53.858 killing process with pid 4038393 00:11:53.858 07:29:09 -- common/autotest_common.sh@945 -- # kill 4038393 00:11:53.858 07:29:09 -- common/autotest_common.sh@950 -- # wait 4038393 00:11:54.117 07:29:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:54.117 07:29:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:54.117 07:29:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:54.117 07:29:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.117 07:29:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:54.117 07:29:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.117 07:29:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.117 07:29:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.651 07:29:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:56.651 00:11:56.651 real 0m25.872s 00:11:56.651 user 1m24.900s 00:11:56.651 sys 0m4.067s 00:11:56.651 07:29:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.651 07:29:12 -- common/autotest_common.sh@10 -- # set +x 00:11:56.651 ************************************ 00:11:56.651 END TEST nvmf_rpc 00:11:56.651 ************************************ 00:11:56.651 07:29:12 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:56.651 07:29:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:56.651 07:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:56.651 07:29:12 -- common/autotest_common.sh@10 -- # set +x 00:11:56.651 ************************************ 00:11:56.651 START TEST nvmf_invalid 00:11:56.651 ************************************ 00:11:56.651 07:29:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:56.651 * Looking for test storage... 
00:11:56.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.651 07:29:12 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.651 07:29:12 -- nvmf/common.sh@7 -- # uname -s 00:11:56.651 07:29:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.651 07:29:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.651 07:29:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.651 07:29:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.651 07:29:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.651 07:29:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.651 07:29:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.651 07:29:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.651 07:29:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.651 07:29:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.651 07:29:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.651 07:29:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.651 07:29:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.651 07:29:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.651 07:29:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.651 07:29:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.651 07:29:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.651 07:29:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.651 07:29:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.651 07:29:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.651 07:29:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.651 07:29:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.651 07:29:12 -- paths/export.sh@5 -- # export PATH 00:11:56.651 07:29:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.651 07:29:12 -- nvmf/common.sh@46 -- # : 0 00:11:56.651 07:29:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:56.651 07:29:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:56.651 07:29:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:56.651 07:29:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.651 07:29:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.651 07:29:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:56.651 07:29:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:56.651 07:29:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:56.651 07:29:12 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:56.651 07:29:12 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.651 07:29:12 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:56.651 07:29:12 -- target/invalid.sh@14 -- # target=foobar 00:11:56.651 07:29:12 -- target/invalid.sh@16 -- # RANDOM=0 00:11:56.651 07:29:12 -- target/invalid.sh@34 -- # nvmftestinit 00:11:56.651 07:29:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:56.651 07:29:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.651 07:29:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:56.651 07:29:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:56.651 07:29:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:56.651 07:29:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.651 07:29:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.651 07:29:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.651 07:29:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:56.651 07:29:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:56.651 07:29:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:56.651 07:29:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 07:29:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:58.565 07:29:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:58.565 07:29:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:58.565 07:29:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:58.565 07:29:14 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:58.565 07:29:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:58.565 07:29:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:58.565 07:29:14 -- nvmf/common.sh@294 -- # net_devs=() 00:11:58.565 07:29:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:58.565 07:29:14 -- nvmf/common.sh@295 -- # e810=() 00:11:58.565 07:29:14 -- nvmf/common.sh@295 -- # local -ga e810 00:11:58.565 07:29:14 -- nvmf/common.sh@296 -- # x722=() 00:11:58.565 07:29:14 -- nvmf/common.sh@296 -- # local -ga x722 00:11:58.565 07:29:14 -- nvmf/common.sh@297 -- # mlx=() 00:11:58.565 07:29:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:58.565 07:29:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.565 07:29:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:58.565 07:29:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:58.565 07:29:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:58.565 07:29:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:58.565 07:29:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:58.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:58.565 07:29:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:58.565 07:29:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:58.565 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:58.565 07:29:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:58.565 07:29:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:58.565 
07:29:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.565 07:29:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:58.565 07:29:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.565 07:29:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:58.565 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:58.565 07:29:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.565 07:29:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:58.565 07:29:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.565 07:29:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:58.565 07:29:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.565 07:29:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:58.565 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:58.565 07:29:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.565 07:29:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:58.565 07:29:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:58.565 07:29:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:58.565 07:29:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.565 07:29:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.565 07:29:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.565 07:29:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:58.565 07:29:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.565 07:29:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.565 07:29:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:58.565 07:29:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.565 07:29:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.565 07:29:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:58.565 07:29:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:58.565 07:29:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.565 07:29:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.565 07:29:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.565 07:29:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.565 07:29:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:58.565 07:29:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.565 07:29:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.565 07:29:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.565 07:29:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:58.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:11:58.565 00:11:58.565 --- 10.0.0.2 ping statistics --- 00:11:58.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.565 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:11:58.565 07:29:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:58.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:11:58.565 00:11:58.565 --- 10.0.0.1 ping statistics --- 00:11:58.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.565 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:58.565 07:29:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.565 07:29:14 -- nvmf/common.sh@410 -- # return 0 00:11:58.565 07:29:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:58.565 07:29:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.565 07:29:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:58.565 07:29:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.565 07:29:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:58.565 07:29:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:58.565 07:29:14 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:58.565 07:29:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:58.565 07:29:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:58.565 07:29:14 -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 07:29:14 -- nvmf/common.sh@469 -- # nvmfpid=4043097 00:11:58.565 07:29:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.565 07:29:14 -- nvmf/common.sh@470 -- # waitforlisten 4043097 00:11:58.565 07:29:14 -- common/autotest_common.sh@819 -- # '[' -z 4043097 ']' 00:11:58.565 07:29:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.565 07:29:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:58.565 07:29:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.565 07:29:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:58.565 07:29:14 -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 [2024-07-14 07:29:14.653385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:58.565 [2024-07-14 07:29:14.653460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.565 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.565 [2024-07-14 07:29:14.732899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.848 [2024-07-14 07:29:14.858725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:58.848 [2024-07-14 07:29:14.858907] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.848 [2024-07-14 07:29:14.858928] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.848 [2024-07-14 07:29:14.858942] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
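[Annotation] For orientation, the nvmf_tcp_init sequence traced at nvmf/common.sh@228 onward gives the test its 10.0.0.x topology by moving one port of the e810 pair into a private network namespace. Stripped of trace prefixes, and with device names as in the log, it is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator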
00:11:58.848 [2024-07-14 07:29:14.859002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.848 [2024-07-14 07:29:14.859059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.848 [2024-07-14 07:29:14.859084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.848 [2024-07-14 07:29:14.859088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.785 07:29:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:59.785 07:29:15 -- common/autotest_common.sh@852 -- # return 0 00:11:59.785 07:29:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:59.785 07:29:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:59.785 07:29:15 -- common/autotest_common.sh@10 -- # set +x 00:11:59.785 07:29:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.785 07:29:15 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:59.785 07:29:15 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20387 00:11:59.785 [2024-07-14 07:29:15.885061] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:59.785 07:29:15 -- target/invalid.sh@40 -- # out='request: 00:11:59.785 { 00:11:59.785 "nqn": "nqn.2016-06.io.spdk:cnode20387", 00:11:59.785 "tgt_name": "foobar", 00:11:59.785 "method": "nvmf_create_subsystem", 00:11:59.785 "req_id": 1 00:11:59.785 } 00:11:59.785 Got JSON-RPC error response 00:11:59.785 response: 00:11:59.785 { 00:11:59.785 "code": -32603, 00:11:59.785 "message": "Unable to find target foobar" 00:11:59.785 }' 00:11:59.785 07:29:15 -- target/invalid.sh@41 -- # [[ request: 00:11:59.785 { 00:11:59.785 "nqn": "nqn.2016-06.io.spdk:cnode20387", 00:11:59.785 "tgt_name": "foobar", 00:11:59.785 "method": "nvmf_create_subsystem", 00:11:59.785 "req_id": 1 00:11:59.785 } 00:11:59.785 Got JSON-RPC error response 00:11:59.785 response: 00:11:59.785 { 00:11:59.785 "code": -32603, 00:11:59.785 "message": "Unable to find target foobar" 00:11:59.785 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:59.785 07:29:15 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:59.785 07:29:15 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7750 00:12:00.042 [2024-07-14 07:29:16.121904] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7750: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:00.042 07:29:16 -- target/invalid.sh@45 -- # out='request: 00:12:00.042 { 00:12:00.042 "nqn": "nqn.2016-06.io.spdk:cnode7750", 00:12:00.042 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:00.042 "method": "nvmf_create_subsystem", 00:12:00.042 "req_id": 1 00:12:00.042 } 00:12:00.042 Got JSON-RPC error response 00:12:00.042 response: 00:12:00.042 { 00:12:00.042 "code": -32602, 00:12:00.043 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:00.043 }' 00:12:00.043 07:29:16 -- target/invalid.sh@46 -- # [[ request: 00:12:00.043 { 00:12:00.043 "nqn": "nqn.2016-06.io.spdk:cnode7750", 00:12:00.043 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:00.043 "method": "nvmf_create_subsystem", 00:12:00.043 "req_id": 1 00:12:00.043 } 00:12:00.043 Got JSON-RPC error response 00:12:00.043 response: 00:12:00.043 { 
00:12:00.043 "code": -32602, 00:12:00.043 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:00.043 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:00.043 07:29:16 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:00.043 07:29:16 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30076 00:12:00.301 [2024-07-14 07:29:16.362662] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30076: invalid model number 'SPDK_Controller' 00:12:00.301 07:29:16 -- target/invalid.sh@50 -- # out='request: 00:12:00.301 { 00:12:00.301 "nqn": "nqn.2016-06.io.spdk:cnode30076", 00:12:00.301 "model_number": "SPDK_Controller\u001f", 00:12:00.301 "method": "nvmf_create_subsystem", 00:12:00.301 "req_id": 1 00:12:00.301 } 00:12:00.301 Got JSON-RPC error response 00:12:00.301 response: 00:12:00.301 { 00:12:00.301 "code": -32602, 00:12:00.301 "message": "Invalid MN SPDK_Controller\u001f" 00:12:00.301 }' 00:12:00.301 07:29:16 -- target/invalid.sh@51 -- # [[ request: 00:12:00.301 { 00:12:00.301 "nqn": "nqn.2016-06.io.spdk:cnode30076", 00:12:00.301 "model_number": "SPDK_Controller\u001f", 00:12:00.301 "method": "nvmf_create_subsystem", 00:12:00.301 "req_id": 1 00:12:00.301 } 00:12:00.301 Got JSON-RPC error response 00:12:00.301 response: 00:12:00.301 { 00:12:00.301 "code": -32602, 00:12:00.301 "message": "Invalid MN SPDK_Controller\u001f" 00:12:00.301 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:00.301 07:29:16 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:00.301 07:29:16 -- target/invalid.sh@19 -- # local length=21 ll 00:12:00.301 07:29:16 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:00.301 07:29:16 -- target/invalid.sh@21 -- # local chars 00:12:00.301 07:29:16 -- target/invalid.sh@22 -- # local string 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 56 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=8 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 95 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=_ 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 95 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=_ 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 104 00:12:00.301 07:29:16 -- 
target/invalid.sh@25 -- # echo -e '\x68' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=h 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 110 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=n 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 74 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=J 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 33 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+='!' 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 93 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=']' 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 43 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=+ 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 96 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+='`' 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 40 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+='(' 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 100 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=d 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 43 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=+ 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 111 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=o 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 35 00:12:00.301 07:29:16 -- 
target/invalid.sh@25 -- # echo -e '\x23' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+='#' 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 40 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+='(' 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 122 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=z 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 32 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=' ' 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 111 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=o 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 61 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+== 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # printf %x 127 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:00.301 07:29:16 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.301 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.301 07:29:16 -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:12:00.301 07:29:16 -- target/invalid.sh@31 -- # echo '8__hnJ!]+`(d+o#(z o=' 00:12:00.301 07:29:16 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '8__hnJ!]+`(d+o#(z o=' nqn.2016-06.io.spdk:cnode13744 00:12:00.559 [2024-07-14 07:29:16.707820] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13744: invalid serial number '8__hnJ!]+`(d+o#(z o=' 00:12:00.559 07:29:16 -- target/invalid.sh@54 -- # out='request: 00:12:00.559 { 00:12:00.559 "nqn": "nqn.2016-06.io.spdk:cnode13744", 00:12:00.559 "serial_number": "8__hnJ!]+`(d+o#(z o=\u007f", 00:12:00.559 "method": "nvmf_create_subsystem", 00:12:00.559 "req_id": 1 00:12:00.559 } 00:12:00.559 Got JSON-RPC error response 00:12:00.559 response: 00:12:00.559 { 00:12:00.559 "code": -32602, 00:12:00.559 "message": "Invalid SN 8__hnJ!]+`(d+o#(z o=\u007f" 00:12:00.559 }' 00:12:00.559 07:29:16 -- target/invalid.sh@55 -- # [[ request: 00:12:00.559 { 00:12:00.559 "nqn": "nqn.2016-06.io.spdk:cnode13744", 00:12:00.559 "serial_number": "8__hnJ!]+`(d+o#(z o=\u007f", 00:12:00.559 "method": "nvmf_create_subsystem", 00:12:00.559 "req_id": 1 00:12:00.559 } 00:12:00.559 Got JSON-RPC error response 00:12:00.559 response: 00:12:00.559 { 00:12:00.559 "code": 
-32602, 00:12:00.559 "message": "Invalid SN 8__hnJ!]+`(d+o#(z o=\u007f" 00:12:00.559 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:00.817 07:29:16 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:00.817 07:29:16 -- target/invalid.sh@19 -- # local length=41 ll 00:12:00.817 07:29:16 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:00.817 07:29:16 -- target/invalid.sh@21 -- # local chars 00:12:00.817 07:29:16 -- target/invalid.sh@22 -- # local string 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 80 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=P 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 108 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=l 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 64 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=@ 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 92 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+='\' 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 119 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=w 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 67 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=C 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 109 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=m 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 121 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=y 00:12:00.817 07:29:16 
-- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 63 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+='?' 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 90 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=Z 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 118 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=v 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # printf %x 99 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:00.817 07:29:16 -- target/invalid.sh@25 -- # string+=c 00:12:00.817 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 34 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+='"' 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 103 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=g 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 118 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=v 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 87 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=W 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 83 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=S 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 112 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=p 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 118 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=v 00:12:00.818 07:29:16 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 79 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=O 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 125 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+='}' 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 58 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=: 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 119 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=w 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 124 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+='|' 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 48 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=0 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 55 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=7 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 92 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+='\' 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 101 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=e 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 58 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=: 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 86 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=V 00:12:00.818 07:29:16 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 95 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=_ 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 113 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=q 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 107 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=k 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 60 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+='<' 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 93 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=']' 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 37 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=% 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 40 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+='(' 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 105 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=i 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 115 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=s 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 70 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+=F 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # printf %x 94 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:00.818 07:29:16 -- target/invalid.sh@25 -- # string+='^' 00:12:00.818 07:29:16 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:00.818 07:29:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.818 07:29:16 -- target/invalid.sh@28 -- # [[ P == \- ]] 00:12:00.818 07:29:16 -- target/invalid.sh@31 -- # echo 'Pl@\wCmy?Zvc"gvWSpvO}:w|07\e:V_qk<]%(isF^' 00:12:00.818 07:29:16 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Pl@\wCmy?Zvc"gvWSpvO}:w|07\e:V_qk<]%(isF^' nqn.2016-06.io.spdk:cnode18401 00:12:01.076 [2024-07-14 07:29:17.044965] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18401: invalid model number 'Pl@\wCmy?Zvc"gvWSpvO}:w|07\e:V_qk<]%(isF^' 00:12:01.076 07:29:17 -- target/invalid.sh@58 -- # out='request: 00:12:01.076 { 00:12:01.076 "nqn": "nqn.2016-06.io.spdk:cnode18401", 00:12:01.076 "model_number": "Pl@\\wCmy?Zvc\"gvWSpvO}:w|07\\e:V_qk<]%(isF^", 00:12:01.076 "method": "nvmf_create_subsystem", 00:12:01.076 "req_id": 1 00:12:01.076 } 00:12:01.076 Got JSON-RPC error response 00:12:01.076 response: 00:12:01.076 { 00:12:01.076 "code": -32602, 00:12:01.076 "message": "Invalid MN Pl@\\wCmy?Zvc\"gvWSpvO}:w|07\\e:V_qk<]%(isF^" 00:12:01.076 }' 00:12:01.076 07:29:17 -- target/invalid.sh@59 -- # [[ request: 00:12:01.076 { 00:12:01.076 "nqn": "nqn.2016-06.io.spdk:cnode18401", 00:12:01.076 "model_number": "Pl@\\wCmy?Zvc\"gvWSpvO}:w|07\\e:V_qk<]%(isF^", 00:12:01.076 "method": "nvmf_create_subsystem", 00:12:01.076 "req_id": 1 00:12:01.076 } 00:12:01.076 Got JSON-RPC error response 00:12:01.076 response: 00:12:01.076 { 00:12:01.076 "code": -32602, 00:12:01.076 "message": "Invalid MN Pl@\\wCmy?Zvc\"gvWSpvO}:w|07\\e:V_qk<]%(isF^" 00:12:01.076 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:01.076 07:29:17 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:01.333 [2024-07-14 07:29:17.277781] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.333 07:29:17 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:01.591 07:29:17 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:01.591 07:29:17 -- target/invalid.sh@67 -- # echo '' 00:12:01.591 07:29:17 -- target/invalid.sh@67 -- # head -n 1 00:12:01.591 07:29:17 -- target/invalid.sh@67 -- # IP= 00:12:01.591 07:29:17 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:01.848 [2024-07-14 07:29:17.783522] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:01.848 07:29:17 -- target/invalid.sh@69 -- # out='request: 00:12:01.848 { 00:12:01.848 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:01.848 "listen_address": { 00:12:01.848 "trtype": "tcp", 00:12:01.848 "traddr": "", 00:12:01.848 "trsvcid": "4421" 00:12:01.848 }, 00:12:01.848 "method": "nvmf_subsystem_remove_listener", 00:12:01.848 "req_id": 1 00:12:01.848 } 00:12:01.848 Got JSON-RPC error response 00:12:01.848 response: 00:12:01.848 { 00:12:01.848 "code": -32602, 00:12:01.848 "message": "Invalid parameters" 00:12:01.848 }' 00:12:01.848 07:29:17 -- target/invalid.sh@70 -- # [[ request: 00:12:01.848 { 00:12:01.848 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:01.848 "listen_address": { 00:12:01.848 "trtype": "tcp", 00:12:01.848 "traddr": "", 00:12:01.848 "trsvcid": "4421" 00:12:01.848 }, 00:12:01.848 "method": 
"nvmf_subsystem_remove_listener", 00:12:01.848 "req_id": 1 00:12:01.848 } 00:12:01.848 Got JSON-RPC error response 00:12:01.848 response: 00:12:01.848 { 00:12:01.848 "code": -32602, 00:12:01.848 "message": "Invalid parameters" 00:12:01.848 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:01.848 07:29:17 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10076 -i 0 00:12:02.104 [2024-07-14 07:29:18.024334] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10076: invalid cntlid range [0-65519] 00:12:02.104 07:29:18 -- target/invalid.sh@73 -- # out='request: 00:12:02.104 { 00:12:02.104 "nqn": "nqn.2016-06.io.spdk:cnode10076", 00:12:02.104 "min_cntlid": 0, 00:12:02.104 "method": "nvmf_create_subsystem", 00:12:02.104 "req_id": 1 00:12:02.104 } 00:12:02.104 Got JSON-RPC error response 00:12:02.105 response: 00:12:02.105 { 00:12:02.105 "code": -32602, 00:12:02.105 "message": "Invalid cntlid range [0-65519]" 00:12:02.105 }' 00:12:02.105 07:29:18 -- target/invalid.sh@74 -- # [[ request: 00:12:02.105 { 00:12:02.105 "nqn": "nqn.2016-06.io.spdk:cnode10076", 00:12:02.105 "min_cntlid": 0, 00:12:02.105 "method": "nvmf_create_subsystem", 00:12:02.105 "req_id": 1 00:12:02.105 } 00:12:02.105 Got JSON-RPC error response 00:12:02.105 response: 00:12:02.105 { 00:12:02.105 "code": -32602, 00:12:02.105 "message": "Invalid cntlid range [0-65519]" 00:12:02.105 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.105 07:29:18 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21934 -i 65520 00:12:02.105 [2024-07-14 07:29:18.253051] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21934: invalid cntlid range [65520-65519] 00:12:02.105 07:29:18 -- target/invalid.sh@75 -- # out='request: 00:12:02.105 { 00:12:02.105 "nqn": "nqn.2016-06.io.spdk:cnode21934", 00:12:02.105 "min_cntlid": 65520, 00:12:02.105 "method": "nvmf_create_subsystem", 00:12:02.105 "req_id": 1 00:12:02.105 } 00:12:02.105 Got JSON-RPC error response 00:12:02.105 response: 00:12:02.105 { 00:12:02.105 "code": -32602, 00:12:02.105 "message": "Invalid cntlid range [65520-65519]" 00:12:02.105 }' 00:12:02.105 07:29:18 -- target/invalid.sh@76 -- # [[ request: 00:12:02.105 { 00:12:02.105 "nqn": "nqn.2016-06.io.spdk:cnode21934", 00:12:02.105 "min_cntlid": 65520, 00:12:02.105 "method": "nvmf_create_subsystem", 00:12:02.105 "req_id": 1 00:12:02.105 } 00:12:02.105 Got JSON-RPC error response 00:12:02.105 response: 00:12:02.105 { 00:12:02.105 "code": -32602, 00:12:02.105 "message": "Invalid cntlid range [65520-65519]" 00:12:02.105 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.105 07:29:18 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15519 -I 0 00:12:02.362 [2024-07-14 07:29:18.485875] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15519: invalid cntlid range [1-0] 00:12:02.362 07:29:18 -- target/invalid.sh@77 -- # out='request: 00:12:02.362 { 00:12:02.362 "nqn": "nqn.2016-06.io.spdk:cnode15519", 00:12:02.362 "max_cntlid": 0, 00:12:02.362 "method": "nvmf_create_subsystem", 00:12:02.362 "req_id": 1 00:12:02.362 } 00:12:02.362 Got JSON-RPC error response 00:12:02.362 response: 00:12:02.362 { 00:12:02.362 "code": -32602, 00:12:02.362 "message": 
"Invalid cntlid range [1-0]" 00:12:02.362 }' 00:12:02.362 07:29:18 -- target/invalid.sh@78 -- # [[ request: 00:12:02.362 { 00:12:02.362 "nqn": "nqn.2016-06.io.spdk:cnode15519", 00:12:02.362 "max_cntlid": 0, 00:12:02.362 "method": "nvmf_create_subsystem", 00:12:02.362 "req_id": 1 00:12:02.362 } 00:12:02.362 Got JSON-RPC error response 00:12:02.362 response: 00:12:02.362 { 00:12:02.362 "code": -32602, 00:12:02.362 "message": "Invalid cntlid range [1-0]" 00:12:02.362 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.362 07:29:18 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19695 -I 65520 00:12:02.620 [2024-07-14 07:29:18.718661] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19695: invalid cntlid range [1-65520] 00:12:02.620 07:29:18 -- target/invalid.sh@79 -- # out='request: 00:12:02.620 { 00:12:02.620 "nqn": "nqn.2016-06.io.spdk:cnode19695", 00:12:02.620 "max_cntlid": 65520, 00:12:02.620 "method": "nvmf_create_subsystem", 00:12:02.620 "req_id": 1 00:12:02.620 } 00:12:02.620 Got JSON-RPC error response 00:12:02.620 response: 00:12:02.620 { 00:12:02.620 "code": -32602, 00:12:02.620 "message": "Invalid cntlid range [1-65520]" 00:12:02.620 }' 00:12:02.620 07:29:18 -- target/invalid.sh@80 -- # [[ request: 00:12:02.620 { 00:12:02.620 "nqn": "nqn.2016-06.io.spdk:cnode19695", 00:12:02.620 "max_cntlid": 65520, 00:12:02.620 "method": "nvmf_create_subsystem", 00:12:02.620 "req_id": 1 00:12:02.620 } 00:12:02.620 Got JSON-RPC error response 00:12:02.620 response: 00:12:02.620 { 00:12:02.620 "code": -32602, 00:12:02.620 "message": "Invalid cntlid range [1-65520]" 00:12:02.620 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.620 07:29:18 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17918 -i 6 -I 5 00:12:02.877 [2024-07-14 07:29:18.975511] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17918: invalid cntlid range [6-5] 00:12:02.877 07:29:18 -- target/invalid.sh@83 -- # out='request: 00:12:02.877 { 00:12:02.877 "nqn": "nqn.2016-06.io.spdk:cnode17918", 00:12:02.877 "min_cntlid": 6, 00:12:02.877 "max_cntlid": 5, 00:12:02.877 "method": "nvmf_create_subsystem", 00:12:02.877 "req_id": 1 00:12:02.877 } 00:12:02.877 Got JSON-RPC error response 00:12:02.877 response: 00:12:02.877 { 00:12:02.877 "code": -32602, 00:12:02.877 "message": "Invalid cntlid range [6-5]" 00:12:02.877 }' 00:12:02.877 07:29:18 -- target/invalid.sh@84 -- # [[ request: 00:12:02.877 { 00:12:02.877 "nqn": "nqn.2016-06.io.spdk:cnode17918", 00:12:02.877 "min_cntlid": 6, 00:12:02.877 "max_cntlid": 5, 00:12:02.877 "method": "nvmf_create_subsystem", 00:12:02.877 "req_id": 1 00:12:02.877 } 00:12:02.877 Got JSON-RPC error response 00:12:02.877 response: 00:12:02.877 { 00:12:02.877 "code": -32602, 00:12:02.877 "message": "Invalid cntlid range [6-5]" 00:12:02.877 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.877 07:29:18 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:03.135 07:29:19 -- target/invalid.sh@87 -- # out='request: 00:12:03.135 { 00:12:03.135 "name": "foobar", 00:12:03.135 "method": "nvmf_delete_target", 00:12:03.135 "req_id": 1 00:12:03.135 } 00:12:03.135 Got JSON-RPC error response 00:12:03.135 response: 00:12:03.135 { 00:12:03.135 
"code": -32602, 00:12:03.135 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:03.135 }' 00:12:03.135 07:29:19 -- target/invalid.sh@88 -- # [[ request: 00:12:03.135 { 00:12:03.135 "name": "foobar", 00:12:03.135 "method": "nvmf_delete_target", 00:12:03.135 "req_id": 1 00:12:03.135 } 00:12:03.135 Got JSON-RPC error response 00:12:03.135 response: 00:12:03.135 { 00:12:03.135 "code": -32602, 00:12:03.135 "message": "The specified target doesn't exist, cannot delete it." 00:12:03.135 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:03.135 07:29:19 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:03.135 07:29:19 -- target/invalid.sh@91 -- # nvmftestfini 00:12:03.135 07:29:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:03.135 07:29:19 -- nvmf/common.sh@116 -- # sync 00:12:03.135 07:29:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:03.135 07:29:19 -- nvmf/common.sh@119 -- # set +e 00:12:03.135 07:29:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:03.135 07:29:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:03.135 rmmod nvme_tcp 00:12:03.135 rmmod nvme_fabrics 00:12:03.135 rmmod nvme_keyring 00:12:03.135 07:29:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:03.135 07:29:19 -- nvmf/common.sh@123 -- # set -e 00:12:03.135 07:29:19 -- nvmf/common.sh@124 -- # return 0 00:12:03.135 07:29:19 -- nvmf/common.sh@477 -- # '[' -n 4043097 ']' 00:12:03.135 07:29:19 -- nvmf/common.sh@478 -- # killprocess 4043097 00:12:03.135 07:29:19 -- common/autotest_common.sh@926 -- # '[' -z 4043097 ']' 00:12:03.135 07:29:19 -- common/autotest_common.sh@930 -- # kill -0 4043097 00:12:03.135 07:29:19 -- common/autotest_common.sh@931 -- # uname 00:12:03.135 07:29:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:03.135 07:29:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4043097 00:12:03.135 07:29:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:03.135 07:29:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:03.135 07:29:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4043097' 00:12:03.135 killing process with pid 4043097 00:12:03.135 07:29:19 -- common/autotest_common.sh@945 -- # kill 4043097 00:12:03.135 07:29:19 -- common/autotest_common.sh@950 -- # wait 4043097 00:12:03.393 07:29:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:03.393 07:29:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:03.394 07:29:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:03.394 07:29:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.394 07:29:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:03.394 07:29:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.394 07:29:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.394 07:29:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.929 07:29:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:05.929 00:12:05.929 real 0m9.194s 00:12:05.929 user 0m21.998s 00:12:05.929 sys 0m2.476s 00:12:05.929 07:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.929 07:29:21 -- common/autotest_common.sh@10 -- # set +x 00:12:05.929 ************************************ 00:12:05.929 END TEST nvmf_invalid 00:12:05.929 ************************************ 00:12:05.929 07:29:21 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:05.929 07:29:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:05.929 07:29:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:05.929 07:29:21 -- common/autotest_common.sh@10 -- # set +x 00:12:05.929 ************************************ 00:12:05.929 START TEST nvmf_abort 00:12:05.929 ************************************ 00:12:05.929 07:29:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:05.929 * Looking for test storage... 00:12:05.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.929 07:29:21 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.929 07:29:21 -- nvmf/common.sh@7 -- # uname -s 00:12:05.929 07:29:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.929 07:29:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.929 07:29:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.929 07:29:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.929 07:29:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.929 07:29:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.929 07:29:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.929 07:29:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.929 07:29:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.929 07:29:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.929 07:29:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:05.929 07:29:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:05.929 07:29:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.929 07:29:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.929 07:29:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.929 07:29:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.929 07:29:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.929 07:29:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.929 07:29:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.929 07:29:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.929 07:29:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.929 07:29:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.929 07:29:21 -- paths/export.sh@5 -- # export PATH 00:12:05.929 07:29:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.929 07:29:21 -- nvmf/common.sh@46 -- # : 0 00:12:05.929 07:29:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:05.929 07:29:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:05.929 07:29:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:05.929 07:29:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.929 07:29:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.929 07:29:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:05.929 07:29:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:05.929 07:29:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:05.929 07:29:21 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.929 07:29:21 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:05.929 07:29:21 -- target/abort.sh@14 -- # nvmftestinit 00:12:05.929 07:29:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:05.929 07:29:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.929 07:29:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:05.929 07:29:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:05.929 07:29:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:05.929 07:29:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.929 07:29:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.929 07:29:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.929 07:29:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:05.929 07:29:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:05.929 07:29:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:05.929 07:29:21 -- common/autotest_common.sh@10 -- # set +x 00:12:07.829 07:29:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
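The nvmftestinit trace below enumerates candidate NICs by PCI vendor:device ID (Intel 0x8086, Mellanox 0x15b3) and then narrows pci_devs down to the e810 list before resolving kernel net devices. A rough out-of-harness equivalent for the E810-XXV parts this run cares about, assuming only that lspci and sysfs are available (the pci_bus_cache arrays in the trace are internal to nvmf/common.sh):

    # Hypothetical standalone check, not part of the harness: list Intel
    # E810-XXV (0x8086:0x159b) PCI functions and the net devices bound to
    # them, mirroring what gather_supported_nvmf_pci_devs derives below.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net 2>/dev/null)"
    done

On this node the trace finds exactly two such functions, 0000:0a:00.0 (cvl_0_0) and 0000:0a:00.1 (cvl_0_1).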
00:12:07.829 07:29:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:07.829 07:29:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:07.829 07:29:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:07.829 07:29:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:07.829 07:29:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:07.829 07:29:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:07.829 07:29:23 -- nvmf/common.sh@294 -- # net_devs=() 00:12:07.829 07:29:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:07.829 07:29:23 -- nvmf/common.sh@295 -- # e810=() 00:12:07.829 07:29:23 -- nvmf/common.sh@295 -- # local -ga e810 00:12:07.829 07:29:23 -- nvmf/common.sh@296 -- # x722=() 00:12:07.829 07:29:23 -- nvmf/common.sh@296 -- # local -ga x722 00:12:07.829 07:29:23 -- nvmf/common.sh@297 -- # mlx=() 00:12:07.829 07:29:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:07.829 07:29:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.829 07:29:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:07.829 07:29:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:07.829 07:29:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:07.829 07:29:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:07.829 07:29:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:07.829 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:07.829 07:29:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:07.829 07:29:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:07.829 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:07.829 07:29:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
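Both ports live in one chassis, so nvmf_tcp_init (traced below) wires them back to back through a network namespace: cvl_0_0 becomes the target side at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. Condensed from the ip and iptables calls that follow:

    # Two-port loopback topology used by the TCP tests (commands as they
    # appear in the trace; run as root).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target

The two pings that close the init (0.210 ms out to 10.0.0.2, 0.139 ms back from inside the namespace to 10.0.0.1) confirm the path in both directions before any NVMe/TCP traffic starts.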
00:12:07.829 07:29:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:07.829 07:29:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.829 07:29:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:07.829 07:29:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.829 07:29:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:07.829 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:07.829 07:29:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.829 07:29:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:07.829 07:29:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.829 07:29:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:07.829 07:29:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.829 07:29:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:07.829 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:07.829 07:29:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.829 07:29:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:07.829 07:29:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:07.829 07:29:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:07.829 07:29:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.829 07:29:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.829 07:29:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.829 07:29:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:07.829 07:29:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.829 07:29:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.829 07:29:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:07.829 07:29:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.829 07:29:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.829 07:29:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:07.829 07:29:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:07.829 07:29:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.829 07:29:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.829 07:29:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.829 07:29:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.829 07:29:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:07.829 07:29:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.829 07:29:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.829 07:29:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.829 07:29:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:07.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:07.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:12:07.829 00:12:07.829 --- 10.0.0.2 ping statistics --- 00:12:07.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.829 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:12:07.829 07:29:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:12:07.829 00:12:07.829 --- 10.0.0.1 ping statistics --- 00:12:07.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.829 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:07.829 07:29:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.829 07:29:23 -- nvmf/common.sh@410 -- # return 0 00:12:07.829 07:29:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:07.829 07:29:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.829 07:29:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:07.829 07:29:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.829 07:29:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:07.829 07:29:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:07.829 07:29:23 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:07.829 07:29:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:07.829 07:29:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:07.829 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:12:07.829 07:29:23 -- nvmf/common.sh@469 -- # nvmfpid=4045770 00:12:07.829 07:29:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:07.829 07:29:23 -- nvmf/common.sh@470 -- # waitforlisten 4045770 00:12:07.829 07:29:23 -- common/autotest_common.sh@819 -- # '[' -z 4045770 ']' 00:12:07.829 07:29:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.829 07:29:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:07.829 07:29:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.829 07:29:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:07.829 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:12:07.829 [2024-07-14 07:29:23.753746] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:07.829 [2024-07-14 07:29:23.753827] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.829 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.829 [2024-07-14 07:29:23.828339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.829 [2024-07-14 07:29:23.946379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:07.830 [2024-07-14 07:29:23.946539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.830 [2024-07-14 07:29:23.946559] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
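Once the target is listening, abort.sh provisions its workload over JSON-RPC; the delay bdev is the heart of the test, since layering Delay0 over Malloc0 with 1,000,000 us average and p99 latencies leaves plenty of in-flight I/O for the abort example to cancel. Condensed from the rpc_cmd calls traced below (rpc.py here stands for scripts/rpc.py in the SPDK tree):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB, 4K blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000        # latencies in usec
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The abort example then connects at queue depth 128 (-q 128) and, per the summary further down, 32297 of the 32358 submitted abort requests succeed, 61 come back unsuccessful, and none fail outright.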
00:12:07.830 [2024-07-14 07:29:23.946574] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.830 [2024-07-14 07:29:23.946634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.830 [2024-07-14 07:29:23.946693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.830 [2024-07-14 07:29:23.946690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.761 07:29:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:08.761 07:29:24 -- common/autotest_common.sh@852 -- # return 0 00:12:08.761 07:29:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:08.761 07:29:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:08.761 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 07:29:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.761 07:29:24 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:08.761 07:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.761 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 [2024-07-14 07:29:24.705226] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.761 07:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.761 07:29:24 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:08.761 07:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.761 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 Malloc0 00:12:08.761 07:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.761 07:29:24 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:08.761 07:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.761 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 Delay0 00:12:08.761 07:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.761 07:29:24 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:08.761 07:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.761 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 07:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.761 07:29:24 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:08.761 07:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.761 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 07:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.761 07:29:24 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:08.761 07:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.761 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 [2024-07-14 07:29:24.772021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.761 07:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.762 07:29:24 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.762 07:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.762 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:12:08.762 07:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:12:08.762 07:29:24 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:08.762 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.762 [2024-07-14 07:29:24.920056] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:11.288 Initializing NVMe Controllers 00:12:11.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:11.288 controller IO queue size 128 less than required 00:12:11.288 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:11.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:11.288 Initialization complete. Launching workers. 00:12:11.288 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32297 00:12:11.288 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32358, failed to submit 62 00:12:11.288 success 32297, unsuccess 61, failed 0 00:12:11.288 07:29:27 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:11.288 07:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.288 07:29:27 -- common/autotest_common.sh@10 -- # set +x 00:12:11.288 07:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.288 07:29:27 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:11.288 07:29:27 -- target/abort.sh@38 -- # nvmftestfini 00:12:11.289 07:29:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:11.289 07:29:27 -- nvmf/common.sh@116 -- # sync 00:12:11.289 07:29:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:11.289 07:29:27 -- nvmf/common.sh@119 -- # set +e 00:12:11.289 07:29:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:11.289 07:29:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:11.289 rmmod nvme_tcp 00:12:11.289 rmmod nvme_fabrics 00:12:11.289 rmmod nvme_keyring 00:12:11.289 07:29:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:11.289 07:29:27 -- nvmf/common.sh@123 -- # set -e 00:12:11.289 07:29:27 -- nvmf/common.sh@124 -- # return 0 00:12:11.289 07:29:27 -- nvmf/common.sh@477 -- # '[' -n 4045770 ']' 00:12:11.289 07:29:27 -- nvmf/common.sh@478 -- # killprocess 4045770 00:12:11.289 07:29:27 -- common/autotest_common.sh@926 -- # '[' -z 4045770 ']' 00:12:11.289 07:29:27 -- common/autotest_common.sh@930 -- # kill -0 4045770 00:12:11.289 07:29:27 -- common/autotest_common.sh@931 -- # uname 00:12:11.289 07:29:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:11.289 07:29:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4045770 00:12:11.289 07:29:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:11.289 07:29:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:11.289 07:29:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4045770' 00:12:11.289 killing process with pid 4045770 00:12:11.289 07:29:27 -- common/autotest_common.sh@945 -- # kill 4045770 00:12:11.289 07:29:27 -- common/autotest_common.sh@950 -- # wait 4045770 00:12:11.548 07:29:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:11.548 07:29:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:11.548 07:29:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:11.548 07:29:27 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.548 07:29:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:11.548 07:29:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.548 07:29:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.548 07:29:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.452 07:29:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:13.452 00:12:13.452 real 0m8.009s 00:12:13.452 user 0m12.982s 00:12:13.452 sys 0m2.625s 00:12:13.452 07:29:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.452 07:29:29 -- common/autotest_common.sh@10 -- # set +x 00:12:13.452 ************************************ 00:12:13.452 END TEST nvmf_abort 00:12:13.452 ************************************ 00:12:13.452 07:29:29 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:13.452 07:29:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:13.452 07:29:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:13.452 07:29:29 -- common/autotest_common.sh@10 -- # set +x 00:12:13.452 ************************************ 00:12:13.452 START TEST nvmf_ns_hotplug_stress 00:12:13.452 ************************************ 00:12:13.452 07:29:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:13.452 * Looking for test storage... 00:12:13.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.452 07:29:29 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.452 07:29:29 -- nvmf/common.sh@7 -- # uname -s 00:12:13.452 07:29:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.452 07:29:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.452 07:29:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.452 07:29:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.452 07:29:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.452 07:29:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.452 07:29:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.452 07:29:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.452 07:29:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.452 07:29:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.710 07:29:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.710 07:29:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:13.710 07:29:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.710 07:29:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.710 07:29:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.710 07:29:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.710 07:29:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.710 07:29:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.710 07:29:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.710 07:29:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.710 07:29:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.711 07:29:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.711 07:29:29 -- paths/export.sh@5 -- # export PATH 00:12:13.711 07:29:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.711 07:29:29 -- nvmf/common.sh@46 -- # : 0 00:12:13.711 07:29:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:13.711 07:29:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:13.711 07:29:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:13.711 07:29:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.711 07:29:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.711 07:29:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:13.711 07:29:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:13.711 07:29:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:13.711 07:29:29 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.711 07:29:29 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:13.711 07:29:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:13.711 07:29:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.711 07:29:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:13.711 07:29:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:13.711 07:29:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:13.711 07:29:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:13.711 07:29:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.711 07:29:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.711 07:29:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:13.711 07:29:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:13.711 07:29:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:13.711 07:29:29 -- common/autotest_common.sh@10 -- # set +x 00:12:15.616 07:29:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:15.616 07:29:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:15.616 07:29:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:15.616 07:29:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:15.616 07:29:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:15.616 07:29:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:15.616 07:29:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:15.616 07:29:31 -- nvmf/common.sh@294 -- # net_devs=() 00:12:15.616 07:29:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:15.616 07:29:31 -- nvmf/common.sh@295 -- # e810=() 00:12:15.616 07:29:31 -- nvmf/common.sh@295 -- # local -ga e810 00:12:15.616 07:29:31 -- nvmf/common.sh@296 -- # x722=() 00:12:15.616 07:29:31 -- nvmf/common.sh@296 -- # local -ga x722 00:12:15.616 07:29:31 -- nvmf/common.sh@297 -- # mlx=() 00:12:15.616 07:29:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:15.616 07:29:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.616 07:29:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:15.616 07:29:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:15.616 07:29:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:15.616 07:29:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:15.616 07:29:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:15.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:15.616 07:29:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:15.616 07:29:31 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:15.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:15.616 07:29:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:15.616 07:29:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:15.616 07:29:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.616 07:29:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:15.616 07:29:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.616 07:29:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:15.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:15.616 07:29:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.616 07:29:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:15.616 07:29:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.616 07:29:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:15.616 07:29:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.616 07:29:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:15.616 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:15.616 07:29:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.616 07:29:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:15.616 07:29:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:15.616 07:29:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:15.616 07:29:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:15.616 07:29:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.616 07:29:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.616 07:29:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.616 07:29:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:15.617 07:29:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.617 07:29:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.617 07:29:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:15.617 07:29:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.617 07:29:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.617 07:29:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:15.617 07:29:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:15.617 07:29:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.617 07:29:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.617 07:29:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.617 07:29:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.617 07:29:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:15.617 07:29:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
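The nvmf_tcp_init block above is the crux of the phy setup: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while the sibling port (cvl_0_1) stays in the root namespace as the initiator, so target and initiator can talk over real hardware on a single host. A minimal sketch of the same wiring, using only the commands visible in the trace (the interface names assume the same ice/E810 enumeration):

  # target port goes into its own namespace; initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address each side of the point-to-point link
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

Every subsequent rpc.py listener and perf run in this job targets 10.0.0.2 through this namespace.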
00:12:15.617 07:29:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.617 07:29:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.617 07:29:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:15.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:12:15.617 00:12:15.617 --- 10.0.0.2 ping statistics --- 00:12:15.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.617 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:12:15.617 07:29:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:12:15.617 00:12:15.617 --- 10.0.0.1 ping statistics --- 00:12:15.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.617 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:12:15.617 07:29:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.617 07:29:31 -- nvmf/common.sh@410 -- # return 0 00:12:15.617 07:29:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:15.617 07:29:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.617 07:29:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:15.617 07:29:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:15.617 07:29:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.617 07:29:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:15.617 07:29:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:15.617 07:29:31 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:15.617 07:29:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:15.617 07:29:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:15.617 07:29:31 -- common/autotest_common.sh@10 -- # set +x 00:12:15.617 07:29:31 -- nvmf/common.sh@469 -- # nvmfpid=4048141 00:12:15.617 07:29:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:15.617 07:29:31 -- nvmf/common.sh@470 -- # waitforlisten 4048141 00:12:15.617 07:29:31 -- common/autotest_common.sh@819 -- # '[' -z 4048141 ']' 00:12:15.617 07:29:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.617 07:29:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:15.617 07:29:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.617 07:29:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:15.617 07:29:31 -- common/autotest_common.sh@10 -- # set +x 00:12:15.617 [2024-07-14 07:29:31.763562] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
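With both pings succeeding and nvme-tcp loaded, the harness starts the target application inside the namespace. A condensed sketch of that step, assuming the same workspace layout as this job (waitforlisten is the autotest_common.sh helper that blocks until /var/tmp/spdk.sock answers, as the "Waiting for process to start up..." line below shows):

  # open TCP/4420 on the initiator-side interface, sanity-check the link both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # -e 0xFFFF enables all tracepoint groups; -m 0xE pins reactors to cores 1-3,
  # matching the "Total cores available: 3" notice below
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  waitforlisten "$nvmfpid"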
00:12:15.617 [2024-07-14 07:29:31.763638] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.875 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.875 [2024-07-14 07:29:31.828698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.875 [2024-07-14 07:29:31.933184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:15.875 [2024-07-14 07:29:31.933355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.875 [2024-07-14 07:29:31.933373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.875 [2024-07-14 07:29:31.933386] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.875 [2024-07-14 07:29:31.933449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.875 [2024-07-14 07:29:31.933574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.875 [2024-07-14 07:29:31.933577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.809 07:29:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:16.809 07:29:32 -- common/autotest_common.sh@852 -- # return 0 00:12:16.809 07:29:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:16.809 07:29:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:16.809 07:29:32 -- common/autotest_common.sh@10 -- # set +x 00:12:16.809 07:29:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.809 07:29:32 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:16.809 07:29:32 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:16.809 [2024-07-14 07:29:32.958925] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.067 07:29:32 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:17.324 07:29:33 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.324 [2024-07-14 07:29:33.473642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.324 07:29:33 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:17.581 07:29:33 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:17.839 Malloc0 00:12:17.839 07:29:33 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:18.096 Delay0 00:12:18.096 07:29:34 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.353 07:29:34 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:18.611 NULL1 00:12:18.611 07:29:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:18.869 07:29:34 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4048578 00:12:18.869 07:29:34 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:18.869 07:29:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:18.869 07:29:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.869 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.246 Read completed with error (sct=0, sc=11) 00:12:20.246 07:29:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.246 07:29:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:20.246 07:29:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:20.505 true 00:12:20.505 07:29:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:20.505 07:29:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.439 07:29:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.697 07:29:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:21.697 07:29:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:21.697 true 00:12:21.697 07:29:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:21.697 07:29:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.954 07:29:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.212 07:29:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:22.212 07:29:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:22.469 true 00:12:22.469 07:29:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:22.469 07:29:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
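Before the numbered loop iterations above, the target was provisioned with a handful of RPCs scattered through the trace. Collected in one place (rpc.py stands for spdk/scripts/rpc.py, reaching the target over the default /var/tmp/spdk.sock):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0        # 32 MiB backing device, 512 B blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py bdev_null_create NULL1 1000 512             # null bdev, resized by the loop below
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The -m 10 cap on the subsystem matters for the stress phase later: it bounds how many namespaces the add/remove churn can attach at once.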
00:12:23.402 07:29:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.659 07:29:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:23.659 07:29:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:23.915 true 00:12:23.915 07:29:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:23.915 07:29:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.172 07:29:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.429 07:29:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:24.429 07:29:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:24.687 true 00:12:24.687 07:29:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:24.687 07:29:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.620 07:29:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.620 07:29:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:25.620 07:29:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:25.878 true 00:12:25.878 07:29:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:25.878 07:29:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.135 07:29:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.392 07:29:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:26.392 07:29:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:26.648 true 00:12:26.648 07:29:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:26.648 07:29:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.579 07:29:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.835 07:29:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:27.835 07:29:43 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:28.092 true 00:12:28.092 07:29:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:28.092 07:29:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.350 07:29:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.608 07:29:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:28.608 07:29:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:28.865 true 00:12:28.865 07:29:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:28.865 07:29:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.829 07:29:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.086 07:29:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:30.086 07:29:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:30.343 true 00:12:30.343 07:29:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:30.343 07:29:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.600 07:29:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.857 07:29:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:30.857 07:29:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:31.113 true 00:12:31.113 07:29:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:31.113 07:29:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:31.933 07:29:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.189 07:29:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:32.189 07:29:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:32.445 true 00:12:32.445 07:29:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:32.445 07:29:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.701 07:29:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.958 07:29:48 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:32.958 07:29:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:32.958 true 00:12:32.958 07:29:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:32.958 07:29:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.888 07:29:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.146 07:29:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:34.146 07:29:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:34.404 true 00:12:34.404 07:29:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:34.404 07:29:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.661 07:29:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.919 07:29:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:34.919 07:29:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:35.177 true 00:12:35.177 07:29:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:35.177 07:29:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.434 07:29:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.692 07:29:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:35.692 07:29:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:35.949 true 00:12:35.949 07:29:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:35.949 07:29:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.320 07:29:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.320 07:29:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:37.320 07:29:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:37.577 
true 00:12:37.577 07:29:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:37.577 07:29:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.507 07:29:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.507 07:29:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:38.508 07:29:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:38.765 true 00:12:38.765 07:29:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:38.765 07:29:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.331 07:29:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.331 07:29:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:39.331 07:29:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:39.588 true 00:12:39.588 07:29:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:39.588 07:29:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.846 07:29:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.103 07:29:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:40.103 07:29:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:40.360 true 00:12:40.360 07:29:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:40.361 07:29:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.294 07:29:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.552 07:29:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:41.552 07:29:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:41.809 true 00:12:41.810 07:29:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:41.810 07:29:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.068 07:29:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.326 07:29:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:42.326 07:29:58 -- target/ns_hotplug_stress.sh@50 -- # 
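Each numbered iteration above follows the same pattern. Reconstructed from the xtrace (the exact statement order in ns_hotplug_stress.sh may differ slightly), it is roughly:

  null_size=1000
  while kill -0 "$PERF_PID"; do                    # loop while spdk_nvme_perf is still running
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"   # prints "true" on success, as logged
  done

The interleaved "Read completed with error (sct=0, sc=11)" messages are the point of the test rather than a failure: sct=0/sc=11 (0x0b) is the generic NVMe status Invalid Namespace or Format, which is what perf's in-flight reads report when the namespace they target has just been hot-removed.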
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:42.583 true 00:12:42.583 07:29:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:42.583 07:29:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.550 07:29:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.809 07:29:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:43.809 07:29:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:43.809 true 00:12:44.067 07:29:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:44.067 07:29:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.324 07:30:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.582 07:30:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:44.582 07:30:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:44.582 true 00:12:44.839 07:30:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:44.839 07:30:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.770 07:30:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.770 07:30:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:45.770 07:30:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:46.027 true 00:12:46.027 07:30:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:46.027 07:30:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.284 07:30:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.542 07:30:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:46.542 07:30:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:46.799 true 00:12:46.799 07:30:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:46.799 07:30:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.730 07:30:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:47.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.987 07:30:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:47.987 07:30:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:47.987 true 00:12:47.987 07:30:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:47.987 07:30:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.244 07:30:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.500 07:30:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:48.501 07:30:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:48.758 true 00:12:48.758 07:30:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:48.758 07:30:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.689 Initializing NVMe Controllers 00:12:49.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:49.689 Controller IO queue size 128, less than required. 00:12:49.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:49.689 Controller IO queue size 128, less than required. 00:12:49.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:49.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:49.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:49.689 Initialization complete. Launching workers. 
00:12:49.689 ======================================================== 00:12:49.689 Latency(us) 00:12:49.689 Device Information : IOPS MiB/s Average min max 00:12:49.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 994.33 0.49 72141.82 2232.89 1021968.85 00:12:49.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12281.87 6.00 10421.43 2448.47 442963.49 00:12:49.689 ======================================================== 00:12:49.689 Total : 13276.20 6.48 15044.03 2232.89 1021968.85 00:12:49.689 00:12:49.947 07:30:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.204 07:30:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:50.204 07:30:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:50.204 true 00:12:50.204 07:30:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4048578 00:12:50.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4048578) - No such process 00:12:50.204 07:30:06 -- target/ns_hotplug_stress.sh@53 -- # wait 4048578 00:12:50.204 07:30:06 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.461 07:30:06 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:50.719 07:30:06 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:50.719 07:30:06 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:50.719 07:30:06 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:50.719 07:30:06 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:50.719 07:30:06 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:50.977 null0 00:12:50.977 07:30:07 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:50.977 07:30:07 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:50.977 07:30:07 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:51.235 null1 00:12:51.235 07:30:07 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:51.235 07:30:07 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:51.235 07:30:07 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:51.494 null2 00:12:51.494 07:30:07 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:51.494 07:30:07 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:51.494 07:30:07 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:51.752 null3 00:12:51.752 07:30:07 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:51.752 07:30:07 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:51.752 07:30:07 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:52.009 null4 00:12:52.009 07:30:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:52.009 07:30:08 -- target/ns_hotplug_stress.sh@59 
-- # (( i < nthreads )) 00:12:52.009 07:30:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:52.267 null5 00:12:52.267 07:30:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:52.267 07:30:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:52.267 07:30:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:52.523 null6 00:12:52.523 07:30:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:52.523 07:30:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:52.523 07:30:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:52.781 null7 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@66 -- # wait 4053353 4053354 4053357 4053359 4053361 4053363 4053365 4053367 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:52.781 07:30:08 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:53.039 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.039 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:53.039 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:53.039 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:53.039 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:53.039 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:53.039 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.039 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
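The eight workers spawned above all run the same small helper. Reconstructed from the xtrace, the phase looks roughly like this (rpc.py again stands for spdk/scripts/rpc.py; the null0..null7 bdevs were created just before as 100 MiB null bdevs with 4 KiB blocks):

  add_remove() {                                   # repeatedly attach/detach one bdev as one nsid
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  nthreads=8 pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"                                # the "wait 4053353 ..." line in the trace

Because all eight workers hammer the same subsystem concurrently, the add/remove RPCs interleave arbitrarily, which is exactly the namespace hot-plug race this test is exercising.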
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.297 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:53.556 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:53.556 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:53.556 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:53.556 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:53.556 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:53.556 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:53.556 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:53.556 07:30:09 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:53.815 07:30:09 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:54.072 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:54.072 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:54.072 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:54.072 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:54.072 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:54.072 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:54.072 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:54.072 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:54.330 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.330 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.330 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:54.330 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.330 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.330 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:54.330 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.331 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:54.589 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:54.589 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:54.589 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:54.589 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:54.589 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:54.589 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:54.589 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:54.589 07:30:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:54.848 07:30:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:55.106 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:55.106 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:55.106 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:55.106 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:55.107 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:55.107 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:55.107 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:55.107 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.365 07:30:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:55.623 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:55.623 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:55.623 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:55.623 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:55.623 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:55.623 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:55.623 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:55.623 07:30:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:55.889 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:56.191 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:56.191 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:56.191 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:56.191 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:56.191 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:56.191 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:56.191 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:56.191 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:56.450 07:30:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:56.753 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:56.753 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:56.753 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:56.753 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:56.753 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:56.753 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:56.753 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:56.753 07:30:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.011 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:57.269 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:57.269 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:57.269 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:57.269 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:57.269 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:57.269 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:57.269 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:57.269 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:57.526 07:30:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:57.783 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:57.783 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:57.783 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:57.783 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:57.783 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:57.783 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:57.783 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:57.783 07:30:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:12:58.040 07:30:14 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:12:58.040 07:30:14 -- nvmf/common.sh@476 -- # nvmfcleanup
00:12:58.040 07:30:14 -- nvmf/common.sh@116 -- # sync
00:12:58.040 07:30:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:12:58.040 07:30:14 -- nvmf/common.sh@119 -- # set +e
00:12:58.040 07:30:14 -- nvmf/common.sh@120 -- # for i in {1..20}
00:12:58.040 07:30:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:12:58.040 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:12:58.298 07:30:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:12:58.298 07:30:14 -- nvmf/common.sh@123 -- # set -e
00:12:58.298 07:30:14 -- nvmf/common.sh@124 -- # return 0
00:12:58.298 07:30:14 -- nvmf/common.sh@477 -- # '[' -n 4048141 ']'
00:12:58.298 07:30:14 -- nvmf/common.sh@478 -- # killprocess 4048141
00:12:58.298 07:30:14 -- common/autotest_common.sh@926 -- # '[' -z 4048141 ']'
00:12:58.298 07:30:14 -- common/autotest_common.sh@930 -- # kill -0 4048141
00:12:58.298 07:30:14 -- common/autotest_common.sh@931 -- # uname
00:12:58.298 07:30:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:12:58.298 07:30:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4048141
00:12:58.298 07:30:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:12:58.298 07:30:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:12:58.298 07:30:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4048141'
00:12:58.298 killing process with pid 4048141
00:12:58.298 07:30:14 -- common/autotest_common.sh@945 -- # kill 4048141
00:12:58.298 07:30:14 -- common/autotest_common.sh@950 -- # wait 4048141
00:12:58.557 07:30:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:12:58.557 07:30:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:12:58.557 07:30:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:12:58.557 07:30:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:58.557 07:30:14 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:12:58.557 07:30:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:58.557 07:30:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
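The teardown just traced (nvmf/common.sh@116-124) is worth decoding: nvmfcleanup syncs, then tries to unload the kernel initiator modules with errexit suspended, retrying up to 20 times because the unload can fail while connections are still draining. A minimal sketch of that shape (the && break early-exit is an assumption; only the individual commands appear in the trace):

    sync
    set +e                            # module removal may fail transiently
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e

The bare rmmod nvme_tcp / rmmod nvme_fabrics / rmmod nvme_keyring lines are modprobe's verbose output confirming the first attempt succeeded.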
00:12:58.557 07:30:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:00.462 07:30:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:13:00.462
00:13:00.462 real 0m47.030s
00:13:00.462 user 3m31.829s
00:13:00.462 sys 0m15.972s
00:13:00.462 07:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:00.462 07:30:16 -- common/autotest_common.sh@10 -- # set +x
00:13:00.462 ************************************
00:13:00.462 END TEST nvmf_ns_hotplug_stress
00:13:00.462 ************************************
00:13:00.462 07:30:16 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:00.462 07:30:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:13:00.462 07:30:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:00.462 07:30:16 -- common/autotest_common.sh@10 -- # set +x
00:13:00.462 ************************************
00:13:00.462 START TEST nvmf_connect_stress
00:13:00.462 ************************************
00:13:00.462 07:30:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:00.721 * Looking for test storage...
00:13:00.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:00.721 07:30:16 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:00.721 07:30:16 -- nvmf/common.sh@7 -- # uname -s
00:13:00.721 07:30:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:00.721 07:30:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:00.721 07:30:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:00.721 07:30:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:00.721 07:30:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:00.721 07:30:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:00.721 07:30:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:00.721 07:30:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:00.721 07:30:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:00.721 07:30:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:00.721 07:30:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:13:00.721 07:30:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:13:00.721 07:30:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:00.721 07:30:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:00.721 07:30:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:00.721 07:30:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:00.721 07:30:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:00.721 07:30:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:00.721 07:30:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
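Every test script is driven through the run_test wrapper seen at nvmf/nvmf.sh@33: it prints the banner pairs above, runs the script under time (hence the real/user/sys triple right before the END banner), and forwards the script's arguments. A hedged sketch of that wrapper, inferred from the @1077-@1104 trace lines rather than copied from autotest_common.sh:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # here: .../target/connect_stress.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }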
00:13:00.721 07:30:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:00.721 07:30:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:00.721 07:30:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:00.721 07:30:16 -- paths/export.sh@5 -- # export PATH
00:13:00.721 07:30:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:00.721 07:30:16 -- nvmf/common.sh@46 -- # : 0
00:13:00.721 07:30:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:00.721 07:30:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:00.721 07:30:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:00.721 07:30:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:00.721 07:30:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:00.721 07:30:16 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:13:00.721 07:30:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:13:00.721 07:30:16 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:13:00.721 07:30:16 -- target/connect_stress.sh@12 -- # nvmftestinit
00:13:00.721 07:30:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:13:00.721 07:30:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:00.721 07:30:16 -- nvmf/common.sh@436 -- # prepare_net_devs
00:13:00.721 07:30:16 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:13:00.721 07:30:16 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:13:00.721 07:30:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:00.721 07:30:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:00.722 07:30:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
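The gigantic PATH values above are an artifact of /etc/opt/spdk-pkgdep/paths/export.sh being re-sourced by every test script in the run: each pass prepends the golangci, Go, and protoc directories again, so duplicates accrete. Judging by the expanded results at @2-@4, the file is essentially just:

    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    echo $PATH

(only the already-expanded assignments are visible in the trace, so the exact source text is inferred).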
00:13:02.624 07:30:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:13:02.624 07:30:18 -- nvmf/common.sh@284 -- # xtrace_disable
00:13:02.624 07:30:18 -- common/autotest_common.sh@10 -- # set +x
00:13:02.624 07:30:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:13:02.624 07:30:18 -- nvmf/common.sh@290 -- # pci_devs=()
00:13:02.624 07:30:18 -- nvmf/common.sh@290 -- # local -a pci_devs
00:13:02.624 07:30:18 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:13:02.624 07:30:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:13:02.624 07:30:18 -- nvmf/common.sh@292 -- # pci_drivers=()
00:13:02.624 07:30:18 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:13:02.624 07:30:18 -- nvmf/common.sh@294 -- # net_devs=()
00:13:02.624 07:30:18 -- nvmf/common.sh@294 -- # local -ga net_devs
00:13:02.624 07:30:18 -- nvmf/common.sh@295 -- # e810=()
00:13:02.624 07:30:18 -- nvmf/common.sh@295 -- # local -ga e810
00:13:02.624 07:30:18 -- nvmf/common.sh@296 -- # x722=()
00:13:02.624 07:30:18 -- nvmf/common.sh@296 -- # local -ga x722
00:13:02.624 07:30:18 -- nvmf/common.sh@297 -- # mlx=()
00:13:02.624 07:30:18 -- nvmf/common.sh@297 -- # local -ga mlx
00:13:02.624 07:30:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:02.624 07:30:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:13:02.624 07:30:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:13:02.624 07:30:18 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:13:02.624 07:30:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:13:02.624 07:30:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:13:02.624 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:13:02.624 07:30:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:13:02.624 07:30:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:13:02.624 07:30:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:13:02.624 Found 0000:0a:00.1 (0x8086 - 0x159b)
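Why only the two 0x159b ports surface here: gather_supported_nvmf_pci_devs builds per-family device-ID lists (e810, x722, mlx) out of a pci_bus_cache populated elsewhere in nvmf/common.sh, and since this job selects e810 NICs (the [[ e810 == e810 ]] test at @328), pci_devs is narrowed to the e810 entries before the scan. Collapsed to its core, the pass looks like:

    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810 family device IDs
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # matched twice on this host
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci ..."                   # driver/ID checks follow in the trace
    done

(pci_bus_cache itself is not shown in this log; the sketch only restates the visible steps.)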
07:30:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:13:02.625 07:30:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:13:02.625 07:30:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:02.625 07:30:18 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:13:02.625 07:30:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:02.625 07:30:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:13:02.625 Found net devices under 0000:0a:00.0: cvl_0_0
00:13:02.625 07:30:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:13:02.625 07:30:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:13:02.625 07:30:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:02.625 07:30:18 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:13:02.625 07:30:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:02.625 07:30:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:13:02.625 Found net devices under 0000:0a:00.1: cvl_0_1
00:13:02.625 07:30:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:13:02.625 07:30:18 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:13:02.625 07:30:18 -- nvmf/common.sh@402 -- # is_hw=yes
00:13:02.625 07:30:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:13:02.625 07:30:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:13:02.625 07:30:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:02.625 07:30:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:02.625 07:30:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:02.625 07:30:18 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:13:02.625 07:30:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:02.625 07:30:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:02.625 07:30:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:13:02.625 07:30:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:02.625 07:30:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:02.625 07:30:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:13:02.625 07:30:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:13:02.625 07:30:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:13:02.625 07:30:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:02.884 07:30:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:02.884 07:30:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:02.884 07:30:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:13:02.884 07:30:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:02.884 07:30:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:02.884 07:30:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
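Network plumbing summary: the first E810 port (cvl_0_0, the target side) is moved into a fresh namespace, cvl_0_0_ns_spdk, and addressed 10.0.0.2/24; the second port (cvl_0_1, the initiator side) stays in the root namespace as 10.0.0.1/24, with an iptables rule opening TCP 4420 inbound. Lifted from the @243-@263 trace lines, the equivalent standalone commands are:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ports are presumably cabled to each other (NET_TYPE=phy), so the cross-namespace pings that follow exercise the real wire before any NVMe/TCP traffic starts.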
00:13:02.884 07:30:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:13:02.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:02.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms
00:13:02.884
00:13:02.884 --- 10.0.0.2 ping statistics ---
00:13:02.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:02.884 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:13:02.884 07:30:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:02.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:02.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms
00:13:02.884
00:13:02.884 --- 10.0.0.1 ping statistics ---
00:13:02.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:02.884 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:13:02.884 07:30:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:02.884 07:30:18 -- nvmf/common.sh@410 -- # return 0
00:13:02.884 07:30:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:13:02.884 07:30:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:02.884 07:30:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:13:02.884 07:30:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:13:02.884 07:30:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:02.884 07:30:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:13:02.884 07:30:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:13:02.884 07:30:18 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:02.884 07:30:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:13:02.884 07:30:18 -- common/autotest_common.sh@712 -- # xtrace_disable
00:13:02.884 07:30:18 -- common/autotest_common.sh@10 -- # set +x
00:13:02.884 07:30:18 -- nvmf/common.sh@469 -- # nvmfpid=4056156
00:13:02.884 07:30:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:02.884 07:30:18 -- nvmf/common.sh@470 -- # waitforlisten 4056156
00:13:02.884 07:30:18 -- common/autotest_common.sh@819 -- # '[' -z 4056156 ']'
00:13:02.884 07:30:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:02.884 07:30:18 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:02.884 07:30:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:02.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:02.884 07:30:18 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:02.884 07:30:18 -- common/autotest_common.sh@10 -- # set +x
00:13:02.884 [2024-07-14 07:30:18.940997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
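Because the target port lives inside the namespace, the nvmf_tgt application is launched under ip netns exec, and waitforlisten then polls the RPC socket (up to max_retries=100 per the trace) until the app answers. The equivalent commands, with the $! PID capture being an assumption since the log only shows the resulting nvmfpid=4056156:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten $nvmfpid    # blocks until /var/tmp/spdk.sock accepts RPCs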
00:13:02.884 [2024-07-14 07:30:18.941081] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:02.884 EAL: No free 2048 kB hugepages reported on node 1
00:13:03.142 [2024-07-14 07:30:19.013335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:03.142 [2024-07-14 07:30:19.131264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:03.142 [2024-07-14 07:30:19.131423] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:03.142 [2024-07-14 07:30:19.131452] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:03.142 [2024-07-14 07:30:19.131468] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:03.142 [2024-07-14 07:30:19.131535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:03.142 [2024-07-14 07:30:19.133884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:03.142 [2024-07-14 07:30:19.133897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:03.708 07:30:19 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:03.708 07:30:19 -- common/autotest_common.sh@852 -- # return 0
00:13:03.708 07:30:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:13:03.708 07:30:19 -- common/autotest_common.sh@718 -- # xtrace_disable
00:13:03.708 07:30:19 -- common/autotest_common.sh@10 -- # set +x
00:13:03.708 07:30:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:03.708 07:30:19 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:03.708 07:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.708 07:30:19 -- common/autotest_common.sh@10 -- # set +x
00:13:03.708 [2024-07-14 07:30:19.869333] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:03.708 07:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.708 07:30:19 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:03.708 07:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.708 07:30:19 -- common/autotest_common.sh@10 -- # set +x
00:13:03.966 07:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.966 07:30:19 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:03.966 07:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.966 07:30:19 -- common/autotest_common.sh@10 -- # set +x
00:13:03.966 [2024-07-14 07:30:19.896028] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:03.966 07:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.966 07:30:19 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:03.966 07:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.966 07:30:19 -- common/autotest_common.sh@10 -- # set +x
00:13:03.966 NULL1
00:13:03.966 07:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.966 07:30:19 -- target/connect_stress.sh@21 -- # PERF_PID=4056311
00:13:03.966 07:30:19 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
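Target configuration, in order: a TCP transport, a subsystem that allows any host (-a) with serial SPDK00000000000001 and at most 10 namespaces (-m 10), a listener on the namespaced address, and a 1000 MiB null backing bdev with 512-byte blocks. rpc_cmd is the harness wrapper around rpc.py; the same setup as four direct calls would read:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_null_create NULL1 1000 512

The bare NULL1 line in the log is bdev_null_create echoing the new bdev's name.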
00:13:03.966 07:30:19 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:03.966 07:30:19 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # seq 1 20
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 EAL: No free 2048 kB hugepages reported on node 1
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:03.966 07:30:19 -- target/connect_stress.sh@28 -- # cat
00:13:03.966 07:30:19 -- target/connect_stress.sh@34 -- # kill -0 4056311
00:13:03.966 07:30:19 -- target/connect_stress.sh@35 -- # rpc_cmd
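With the connect_stress binary running against 10.0.0.2:4420 for 10 seconds (-t 10), the script queues twenty RPC requests into rpc.txt and replays that file against the target for as long as the perf process (PID 4056311) stays alive; the repeated @34/@35 pairs that follow are that polling loop. Shape inferred from the @23-@35 trace lines (the heredoc body appended by each cat is not visible in this log, so it is left as a placeholder):

    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    rm -f $rpcs
    for i in $(seq 1 20); do
        cat <<EOF >> $rpcs
    ...one RPC request per iteration; its text is not shown in the log...
    EOF
    done
    while kill -0 $PERF_PID; do
        rpc_cmd < $rpcs
    done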
00:13:03.966 07:30:19 -- target/connect_stress.sh@34 -- # kill -0 4056311 00:13:03.966 07:30:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.966 07:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.966 07:30:19 -- common/autotest_common.sh@10 -- # set +x 00:13:04.224 07:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.224 07:30:20 -- target/connect_stress.sh@34 -- # kill -0 4056311 00:13:04.224 07:30:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.224 07:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.224 07:30:20 -- common/autotest_common.sh@10 -- # set +x [the same five-entry check cycle ("[[ 0 == 0 ]]" / "kill -0 4056311" / "rpc_cmd" / "xtrace_disable" / "set +x") repeats unchanged at sub-second intervals from 00:13:04.482 through 00:13:13.270 (07:30:20-07:30:29) while connect_stress runs; the intervening identical iterations are elided] 00:13:13.528 07:30:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.528 07:30:29 -- target/connect_stress.sh@34 -- # kill -0 4056311 00:13:13.528 07:30:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.528 07:30:29 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.528 07:30:29 -- common/autotest_common.sh@10 -- # set +x 00:13:13.784 07:30:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.784 07:30:29 -- target/connect_stress.sh@34 -- # kill -0 4056311 00:13:13.784 07:30:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.784 07:30:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.784 07:30:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.041 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:14.299 07:30:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.299 07:30:30 -- target/connect_stress.sh@34 -- # kill -0 4056311 00:13:14.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4056311) - No such process 00:13:14.299 07:30:30 -- target/connect_stress.sh@38 -- # wait 4056311 00:13:14.299 07:30:30 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:14.299 07:30:30 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:14.299 07:30:30 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:14.299 07:30:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:14.299 07:30:30 -- nvmf/common.sh@116 -- # sync 00:13:14.299 07:30:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:14.299 07:30:30 -- nvmf/common.sh@119 -- # set +e 00:13:14.299 07:30:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:14.299 07:30:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:14.299 rmmod nvme_tcp 00:13:14.299 rmmod nvme_fabrics 00:13:14.299 rmmod nvme_keyring 00:13:14.299 07:30:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:14.299 07:30:30 -- nvmf/common.sh@123 -- # set -e 00:13:14.299 07:30:30 -- nvmf/common.sh@124 -- # return 0 00:13:14.299 07:30:30 -- nvmf/common.sh@477 -- # '[' -n 4056156 ']' 00:13:14.299 07:30:30 -- nvmf/common.sh@478 -- # killprocess 4056156 00:13:14.299 07:30:30 -- common/autotest_common.sh@926 -- # '[' -z 4056156 ']' 00:13:14.299 07:30:30 -- common/autotest_common.sh@930 -- # kill -0 4056156 00:13:14.299 07:30:30 -- common/autotest_common.sh@931 -- # uname 00:13:14.299 07:30:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:14.299 07:30:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4056156 00:13:14.299 07:30:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:14.299 07:30:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:14.299 07:30:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4056156' 00:13:14.299 killing process with pid 4056156 00:13:14.299 07:30:30 -- common/autotest_common.sh@945 -- # kill 4056156 00:13:14.299 07:30:30 -- common/autotest_common.sh@950 -- # wait 4056156 00:13:14.556 07:30:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:14.556 07:30:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:14.556 07:30:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:14.556 07:30:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.556 07:30:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:14.556 07:30:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.556 07:30:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.556 07:30:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.089 07:30:32 -- nvmf/common.sh@278 -- # ip -4 addr 
flush cvl_0_1 00:13:17.089 00:13:17.089 real 0m16.022s 00:13:17.089 user 0m40.073s 00:13:17.089 sys 0m6.097s 00:13:17.089 07:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.089 07:30:32 -- common/autotest_common.sh@10 -- # set +x 00:13:17.089 ************************************ 00:13:17.089 END TEST nvmf_connect_stress 00:13:17.089 ************************************ 00:13:17.089 07:30:32 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:17.089 07:30:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:17.089 07:30:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:17.089 07:30:32 -- common/autotest_common.sh@10 -- # set +x 00:13:17.089 ************************************ 00:13:17.089 START TEST nvmf_fused_ordering 00:13:17.089 ************************************ 00:13:17.089 07:30:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:17.089 * Looking for test storage... 00:13:17.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.089 07:30:32 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.089 07:30:32 -- nvmf/common.sh@7 -- # uname -s 00:13:17.089 07:30:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.089 07:30:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.089 07:30:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.089 07:30:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.089 07:30:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.089 07:30:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.090 07:30:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.090 07:30:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.090 07:30:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.090 07:30:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.090 07:30:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.090 07:30:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:17.090 07:30:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.090 07:30:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.090 07:30:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.090 07:30:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.090 07:30:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.090 07:30:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.090 07:30:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.090 07:30:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go triple repeats four more times, left over from earlier sourcings of export.sh]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.090 07:30:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous value; duplicated toolchain segments elided] 00:13:17.090 07:30:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous value; duplicated toolchain segments elided] 00:13:17.090 07:30:32 -- paths/export.sh@5 -- # export PATH 00:13:17.090 07:30:32 -- paths/export.sh@6 -- # echo [the exported PATH as above; duplicated toolchain segments elided] 00:13:17.090 07:30:32 -- nvmf/common.sh@46 -- # : 0 00:13:17.090 07:30:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:17.090 07:30:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:17.090 07:30:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:17.090 07:30:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.090 07:30:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.090 07:30:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:17.090 07:30:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:17.090 07:30:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:17.090 07:30:32 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:17.090 07:30:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:17.090 07:30:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.090 07:30:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:17.090 07:30:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:17.090 07:30:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:17.090 07:30:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.090 07:30:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.090 07:30:32 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.090 07:30:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:17.090 07:30:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:17.090 07:30:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:17.090 07:30:32 -- common/autotest_common.sh@10 -- # set +x 00:13:18.997 07:30:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:18.997 07:30:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:18.997 07:30:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:18.997 07:30:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:18.997 07:30:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:18.997 07:30:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:18.997 07:30:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:18.997 07:30:34 -- nvmf/common.sh@294 -- # net_devs=() 00:13:18.997 07:30:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:18.997 07:30:34 -- nvmf/common.sh@295 -- # e810=() 00:13:18.997 07:30:34 -- nvmf/common.sh@295 -- # local -ga e810 00:13:18.997 07:30:34 -- nvmf/common.sh@296 -- # x722=() 00:13:18.997 07:30:34 -- nvmf/common.sh@296 -- # local -ga x722 00:13:18.997 07:30:34 -- nvmf/common.sh@297 -- # mlx=() 00:13:18.997 07:30:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:18.997 07:30:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.997 07:30:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:18.997 07:30:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:18.997 07:30:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:18.997 07:30:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:18.997 07:30:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:18.997 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:18.997 07:30:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:18.997 07:30:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:18.997 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:18.997 
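The device scan above keys on PCI IDs: vendor 0x8086 (Intel) with device 0x159b matches the E810 ("ice"-driven) ports, and both functions 0000:0a:00.0 and 0000:0a:00.1 are picked up along with their net devices. As a rough, self-contained sketch of that sysfs walk (a hypothetical stand-in, not the suite's gather_supported_nvmf_pci_devs):

    # Hypothetical sketch: list net devices backed by Intel E810 functions (vendor 0x8086,
    # device 0x159b), mirroring the discovery whose results are echoed above.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device ${net##*/} under ${pci##*/}"
        done
    done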
07:30:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:18.997 07:30:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:18.997 07:30:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.997 07:30:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:18.997 07:30:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.997 07:30:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:18.997 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:18.997 07:30:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.997 07:30:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:18.997 07:30:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.997 07:30:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:18.997 07:30:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.997 07:30:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:18.997 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:18.997 07:30:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.997 07:30:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:18.997 07:30:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:18.997 07:30:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:18.997 07:30:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:18.997 07:30:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.997 07:30:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.997 07:30:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.997 07:30:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:18.997 07:30:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.997 07:30:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.997 07:30:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:18.997 07:30:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.997 07:30:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.997 07:30:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:18.997 07:30:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:18.997 07:30:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.997 07:30:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.997 07:30:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.997 07:30:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.997 07:30:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:18.998 07:30:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.998 07:30:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.998 07:30:34 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.998 07:30:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:18.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:13:18.998 00:13:18.998 --- 10.0.0.2 ping statistics --- 00:13:18.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.998 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:13:18.998 07:30:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:13:18.998 00:13:18.998 --- 10.0.0.1 ping statistics --- 00:13:18.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.998 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:13:18.998 07:30:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.998 07:30:34 -- nvmf/common.sh@410 -- # return 0 00:13:18.998 07:30:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:18.998 07:30:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.998 07:30:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:18.998 07:30:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:18.998 07:30:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.998 07:30:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:18.998 07:30:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:18.998 07:30:34 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:18.998 07:30:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:18.998 07:30:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:18.998 07:30:34 -- common/autotest_common.sh@10 -- # set +x 00:13:18.998 07:30:34 -- nvmf/common.sh@469 -- # nvmfpid=4059507 00:13:18.998 07:30:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:18.998 07:30:34 -- nvmf/common.sh@470 -- # waitforlisten 4059507 00:13:18.998 07:30:34 -- common/autotest_common.sh@819 -- # '[' -z 4059507 ']' 00:13:18.998 07:30:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.998 07:30:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:18.998 07:30:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.998 07:30:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:18.998 07:30:34 -- common/autotest_common.sh@10 -- # set +x 00:13:18.998 [2024-07-14 07:30:34.868721] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
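Taken together, the nvmftestinit steps above build and verify this topology: the E810 port cvl_0_0 is moved into a private network namespace where the SPDK target will run, cvl_0_1 stays in the host namespace as the initiator side, TCP port 4420 is opened for NVMe/TCP, and both directions are ping-verified before nvmf_tgt starts inside the namespace. Condensed to the bare commands echoed in the trace (interface and namespace names as in this run):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace side)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP to the listener

Running the target in a namespace keeps the test self-contained on one host while still exercising a real NIC link end to end.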
00:13:18.998 [2024-07-14 07:30:34.868814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.998 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.998 [2024-07-14 07:30:34.937292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.998 [2024-07-14 07:30:35.051656] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:18.998 [2024-07-14 07:30:35.051833] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.998 [2024-07-14 07:30:35.051853] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.998 [2024-07-14 07:30:35.051878] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.998 [2024-07-14 07:30:35.051932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.634 07:30:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:19.634 07:30:35 -- common/autotest_common.sh@852 -- # return 0 00:13:19.634 07:30:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:19.634 07:30:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:19.634 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.634 07:30:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.634 07:30:35 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.634 07:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.634 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 [2024-07-14 07:30:35.806298] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.892 07:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.892 07:30:35 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:19.892 07:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.892 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 07:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.892 07:30:35 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.892 07:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.892 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 [2024-07-14 07:30:35.822445] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.892 07:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.892 07:30:35 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:19.892 07:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.892 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 NULL1 00:13:19.892 07:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.892 07:30:35 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:19.892 07:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.892 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 07:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.892 07:30:35 -- target/fused_ordering.sh@20 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:19.892 07:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.892 07:30:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.892 07:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.892 07:30:35 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:19.892 [2024-07-14 07:30:35.866505] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:19.892 [2024-07-14 07:30:35.866548] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059668 ] 00:13:19.892 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.457 Attached to nqn.2016-06.io.spdk:cnode1 00:13:20.457 Namespace ID: 1 size: 1GB 00:13:20.457 fused_ordering(0) [fused_ordering(1) through fused_ordering(938) follow in strictly increasing order with no gaps or reordering, flushed in batches stamped 00:13:20.458, 00:13:21.399-400, 00:13:21.971-972, 00:13:22.907 and 00:13:23.843; the individual counter lines are elided, and the log continues beyond this excerpt]
00:13:23.843 fused_ordering(939) 00:13:23.843 fused_ordering(940) 00:13:23.843 fused_ordering(941) 00:13:23.843 fused_ordering(942) 00:13:23.843 fused_ordering(943) 00:13:23.843 fused_ordering(944) 00:13:23.843 fused_ordering(945) 00:13:23.843 fused_ordering(946) 00:13:23.843 fused_ordering(947) 00:13:23.843 fused_ordering(948) 00:13:23.843 fused_ordering(949) 00:13:23.843 fused_ordering(950) 00:13:23.843 fused_ordering(951) 00:13:23.843 fused_ordering(952) 00:13:23.843 fused_ordering(953) 00:13:23.843 fused_ordering(954) 00:13:23.843 fused_ordering(955) 00:13:23.843 fused_ordering(956) 00:13:23.843 fused_ordering(957) 00:13:23.843 fused_ordering(958) 00:13:23.843 fused_ordering(959) 00:13:23.843 fused_ordering(960) 00:13:23.843 fused_ordering(961) 00:13:23.843 fused_ordering(962) 00:13:23.843 fused_ordering(963) 00:13:23.843 fused_ordering(964) 00:13:23.843 fused_ordering(965) 00:13:23.843 fused_ordering(966) 00:13:23.843 fused_ordering(967) 00:13:23.843 fused_ordering(968) 00:13:23.843 fused_ordering(969) 00:13:23.843 fused_ordering(970) 00:13:23.843 fused_ordering(971) 00:13:23.843 fused_ordering(972) 00:13:23.843 fused_ordering(973) 00:13:23.843 fused_ordering(974) 00:13:23.843 fused_ordering(975) 00:13:23.843 fused_ordering(976) 00:13:23.843 fused_ordering(977) 00:13:23.843 fused_ordering(978) 00:13:23.843 fused_ordering(979) 00:13:23.843 fused_ordering(980) 00:13:23.843 fused_ordering(981) 00:13:23.843 fused_ordering(982) 00:13:23.843 fused_ordering(983) 00:13:23.843 fused_ordering(984) 00:13:23.843 fused_ordering(985) 00:13:23.843 fused_ordering(986) 00:13:23.843 fused_ordering(987) 00:13:23.843 fused_ordering(988) 00:13:23.843 fused_ordering(989) 00:13:23.843 fused_ordering(990) 00:13:23.843 fused_ordering(991) 00:13:23.843 fused_ordering(992) 00:13:23.843 fused_ordering(993) 00:13:23.843 fused_ordering(994) 00:13:23.843 fused_ordering(995) 00:13:23.843 fused_ordering(996) 00:13:23.843 fused_ordering(997) 00:13:23.843 fused_ordering(998) 00:13:23.843 fused_ordering(999) 00:13:23.843 fused_ordering(1000) 00:13:23.843 fused_ordering(1001) 00:13:23.843 fused_ordering(1002) 00:13:23.843 fused_ordering(1003) 00:13:23.843 fused_ordering(1004) 00:13:23.843 fused_ordering(1005) 00:13:23.843 fused_ordering(1006) 00:13:23.843 fused_ordering(1007) 00:13:23.843 fused_ordering(1008) 00:13:23.843 fused_ordering(1009) 00:13:23.843 fused_ordering(1010) 00:13:23.843 fused_ordering(1011) 00:13:23.843 fused_ordering(1012) 00:13:23.843 fused_ordering(1013) 00:13:23.843 fused_ordering(1014) 00:13:23.843 fused_ordering(1015) 00:13:23.843 fused_ordering(1016) 00:13:23.843 fused_ordering(1017) 00:13:23.843 fused_ordering(1018) 00:13:23.843 fused_ordering(1019) 00:13:23.843 fused_ordering(1020) 00:13:23.843 fused_ordering(1021) 00:13:23.843 fused_ordering(1022) 00:13:23.843 fused_ordering(1023) 00:13:23.843 07:30:39 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:23.843 07:30:39 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:23.843 07:30:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:23.843 07:30:39 -- nvmf/common.sh@116 -- # sync 00:13:23.843 07:30:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:23.843 07:30:39 -- nvmf/common.sh@119 -- # set +e 00:13:23.843 07:30:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:23.843 07:30:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:23.843 rmmod nvme_tcp 00:13:23.843 rmmod nvme_fabrics 00:13:23.843 rmmod nvme_keyring 00:13:23.843 07:30:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:23.843 07:30:39 
-- nvmf/common.sh@123 -- # set -e 00:13:23.843 07:30:39 -- nvmf/common.sh@124 -- # return 0 00:13:23.844 07:30:39 -- nvmf/common.sh@477 -- # '[' -n 4059507 ']' 00:13:23.844 07:30:39 -- nvmf/common.sh@478 -- # killprocess 4059507 00:13:23.844 07:30:39 -- common/autotest_common.sh@926 -- # '[' -z 4059507 ']' 00:13:23.844 07:30:39 -- common/autotest_common.sh@930 -- # kill -0 4059507 00:13:23.844 07:30:39 -- common/autotest_common.sh@931 -- # uname 00:13:23.844 07:30:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:23.844 07:30:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4059507 00:13:23.844 07:30:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:23.844 07:30:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:23.844 07:30:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4059507' 00:13:23.844 killing process with pid 4059507 00:13:23.844 07:30:39 -- common/autotest_common.sh@945 -- # kill 4059507 00:13:23.844 07:30:39 -- common/autotest_common.sh@950 -- # wait 4059507 00:13:24.103 07:30:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:24.103 07:30:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:24.103 07:30:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:24.103 07:30:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.103 07:30:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:24.103 07:30:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.103 07:30:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.103 07:30:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.008 07:30:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:26.008 00:13:26.008 real 0m9.496s 00:13:26.008 user 0m7.158s 00:13:26.008 sys 0m4.640s 00:13:26.008 07:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.008 07:30:42 -- common/autotest_common.sh@10 -- # set +x 00:13:26.008 ************************************ 00:13:26.008 END TEST nvmf_fused_ordering 00:13:26.008 ************************************ 00:13:26.266 07:30:42 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:26.266 07:30:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:26.266 07:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:26.266 07:30:42 -- common/autotest_common.sh@10 -- # set +x 00:13:26.266 ************************************ 00:13:26.266 START TEST nvmf_delete_subsystem 00:13:26.266 ************************************ 00:13:26.266 07:30:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:26.266 * Looking for test storage... 
00:13:26.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.266 07:30:42 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.266 07:30:42 -- nvmf/common.sh@7 -- # uname -s 00:13:26.266 07:30:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.266 07:30:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.266 07:30:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.266 07:30:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.266 07:30:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.266 07:30:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.266 07:30:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.266 07:30:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.266 07:30:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.266 07:30:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.266 07:30:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.266 07:30:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.266 07:30:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.266 07:30:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.266 07:30:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.266 07:30:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.266 07:30:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.266 07:30:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.266 07:30:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:26.266 07:30:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated 4 more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:26.266 07:30:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicated toolchain prefixes elided; same system-dir suffix as above]
00:13:26.266 07:30:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[duplicated toolchain prefixes elided; same system-dir suffix as above]
00:13:26.266 07:30:42 -- paths/export.sh@5 -- # export PATH
00:13:26.266 07:30:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[duplicated toolchain prefixes elided; same system-dir suffix as above]
00:13:26.266 07:30:42 -- nvmf/common.sh@46 -- # : 0 00:13:26.266 07:30:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:26.266 07:30:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:26.266 07:30:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:26.266 07:30:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.266 07:30:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.266 07:30:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:26.266 07:30:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:26.266 07:30:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:26.266 07:30:42 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:26.266 07:30:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:26.266 07:30:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.266 07:30:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:26.266 07:30:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:26.266 07:30:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:26.266 07:30:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.266 07:30:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.266 07:30:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.266 07:30:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:26.266 07:30:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:26.266 07:30:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:26.266 07:30:42 -- common/autotest_common.sh@10 -- # set +x 00:13:28.170 07:30:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:28.170 07:30:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:28.170 07:30:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:28.170 07:30:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:28.170 07:30:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:28.170 07:30:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:28.170 07:30:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:28.170 07:30:44 -- nvmf/common.sh@294 -- # net_devs=() 00:13:28.170 07:30:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:28.170 07:30:44 -- nvmf/common.sh@295 -- # e810=() 00:13:28.170 07:30:44 -- nvmf/common.sh@295 -- # local -ga e810 00:13:28.170 07:30:44 -- nvmf/common.sh@296 -- # x722=()
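The arrays being declared here feed gather_supported_nvmf_pci_devs, which classifies candidate NICs purely by PCI vendor/device ID before it ever looks at netdevs. A rough standalone sketch of that matching step, assuming lspci from pciutils is present and using only the E810 device ID actually seen in this run:

  #!/usr/bin/env bash
  # Illustrative sketch only: list E810-family ports (vendor 0x8086, device 0x159b),
  # the same class of device nvmf/common.sh collects into its e810 array.
  # lspci -Dn prints: <domain:bus:dev.fn> <class>: <vendor>:<device> [rev]
  for slot in $(lspci -Dn | awk '$3 == "8086:159b" { print $1 }'); do
      echo "Found $slot (0x8086 - 0x159b)"
  done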
00:13:28.170 07:30:44 -- nvmf/common.sh@296 -- # local -ga x722 00:13:28.170 07:30:44 -- nvmf/common.sh@297 -- # mlx=() 00:13:28.170 07:30:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:28.170 07:30:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.170 07:30:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:28.170 07:30:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:28.170 07:30:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:28.170 07:30:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:28.170 07:30:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:28.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:28.170 07:30:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:28.170 07:30:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:28.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:28.170 07:30:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:28.170 07:30:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:28.170 07:30:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.170 07:30:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:28.170 07:30:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.170 07:30:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:28.170 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:28.170 07:30:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
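Each matched PCI function is then mapped to its kernel netdev through sysfs, as the loop above just did for 0000:0a:00.0 and repeats below for the second port. The same lookup done by hand (path layout is standard Linux sysfs; the slot value is the one from this trace):

  # A PCI function's netdevs appear under its sysfs device node.
  pci=0000:0a:00.0
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      echo "Found net devices under $pci: ${dev##*/}"
  done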
00:13:28.170 07:30:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:28.170 07:30:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.170 07:30:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:28.170 07:30:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.170 07:30:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:28.170 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:28.170 07:30:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.170 07:30:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:28.170 07:30:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:28.170 07:30:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:28.170 07:30:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.170 07:30:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.170 07:30:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.170 07:30:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:28.170 07:30:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.170 07:30:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.170 07:30:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:28.170 07:30:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.170 07:30:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.170 07:30:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:28.170 07:30:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:28.170 07:30:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.170 07:30:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.170 07:30:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.170 07:30:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.170 07:30:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:28.170 07:30:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.170 07:30:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.170 07:30:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.170 07:30:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:28.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:13:28.170 00:13:28.170 --- 10.0.0.2 ping statistics --- 00:13:28.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.170 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:28.170 07:30:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:13:28.170 00:13:28.170 --- 10.0.0.1 ping statistics --- 00:13:28.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.170 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:13:28.170 07:30:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.170 07:30:44 -- nvmf/common.sh@410 -- # return 0 00:13:28.170 07:30:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:28.170 07:30:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.170 07:30:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:28.170 07:30:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.170 07:30:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:28.170 07:30:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:28.170 07:30:44 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:28.170 07:30:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:28.170 07:30:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:28.170 07:30:44 -- common/autotest_common.sh@10 -- # set +x 00:13:28.170 07:30:44 -- nvmf/common.sh@469 -- # nvmfpid=4062038 00:13:28.170 07:30:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:28.170 07:30:44 -- nvmf/common.sh@470 -- # waitforlisten 4062038 00:13:28.170 07:30:44 -- common/autotest_common.sh@819 -- # '[' -z 4062038 ']' 00:13:28.170 07:30:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.170 07:30:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:28.170 07:30:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.171 07:30:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:28.171 07:30:44 -- common/autotest_common.sh@10 -- # set +x 00:13:28.171 [2024-07-14 07:30:44.266567] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:28.171 [2024-07-14 07:30:44.266660] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.171 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.171 [2024-07-14 07:30:44.336145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:28.428 [2024-07-14 07:30:44.454915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:28.428 [2024-07-14 07:30:44.455079] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.428 [2024-07-14 07:30:44.455100] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.428 [2024-07-14 07:30:44.455114] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
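The nvmf_tcp_init plumbing traced above isolates the target port in its own network namespace so that a single machine can drive real NVMe/TCP traffic against itself over the wire. A condensed, hand-runnable equivalent, using the interface names, namespace, and 10.0.0.0/24 addressing from this run:

  TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                              # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> initiator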
00:13:28.428 [2024-07-14 07:30:44.455185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.428 [2024-07-14 07:30:44.455191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.361 07:30:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:29.361 07:30:45 -- common/autotest_common.sh@852 -- # return 0 00:13:29.361 07:30:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:29.361 07:30:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:29.361 07:30:45 -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 07:30:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.361 07:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.361 07:30:45 -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 [2024-07-14 07:30:45.268990] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.361 07:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:29.361 07:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.361 07:30:45 -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 07:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.361 07:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.361 07:30:45 -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 [2024-07-14 07:30:45.285156] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.361 07:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:29.361 07:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.361 07:30:45 -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 NULL1 00:13:29.361 07:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:29.361 07:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.361 07:30:45 -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 Delay0 00:13:29.361 07:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.361 07:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.361 07:30:45 -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 07:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@28 -- # perf_pid=4062182 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:29.361 07:30:45 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:29.361 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.361 [2024-07-14 07:30:45.359958] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:31.261 07:30:47 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.261 07:30:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:31.261 07:30:47 -- common/autotest_common.sh@10 -- # set +x
00:13:31.520 [a few hundred 'Read/Write completed with error (sct=0, sc=8)' completion lines interleaved with 'starting I/O failed: -6' elided, timestamps 00:13:31.520-00:13:32.463: in-flight perf I/O is failed back as the subsystem is deleted; identical completion lines between the messages below are elided the same way]
00:13:31.521 [2024-07-14 07:30:47.491805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f97a800c480 is same with the state(5) to be set
00:13:32.462 [2024-07-14 07:30:48.458334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a15a0 is same with the state(5) to be set
00:13:32.462 [2024-07-14 07:30:48.490725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f97a800c1d0 is same with the state(5) to be set
00:13:32.463 [2024-07-14 07:30:48.495178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1782c60 is same with the state(5) to be set
00:13:32.463 [2024-07-14 07:30:48.495434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1781f90 is same with the state(5) to be set
00:13:32.463 [2024-07-14 07:30:48.495704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1781e10 is same with the state(5) to be set
00:13:32.463 [2024-07-14 07:30:48.496538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a15a0 (9): Bad file descriptor
00:13:32.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:13:32.463 07:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.463 07:30:48 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:32.463 07:30:48 -- target/delete_subsystem.sh@35 -- # kill -0 4062182 00:13:32.463 07:30:48 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:13:32.463 Initializing NVMe Controllers
00:13:32.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:32.463 Controller IO queue size 128, less than required.
00:13:32.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:32.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:32.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:32.463 Initialization complete. Launching workers.
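The per-core results that follow come from the spdk_nvme_perf invocation traced earlier. Broken out for readability, with the flag values exactly as recorded (the comments are an editorial gloss and worth checking against spdk_nvme_perf --help for your SPDK version):

  #   -c 0xC     worker core mask: cores 2 and 3 (matches the lcore associations above)
  #   -r '...'   NVMe-oF transport ID of the target created earlier
  #   -t 5       run time, seconds
  #   -q 128     queue depth per worker (hence the "queue size 128" notice above)
  #   -w randrw  random mixed workload; -M 70 sets the read percentage to 70%
  #   -o 512     I/O size in bytes
  #   -P 4       qpair/connection parallelism as recorded in the trace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4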
00:13:32.463 ========================================================
00:13:32.463 Latency(us)
00:13:32.463 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:13:32.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     194.45       0.09  946697.60    1027.05 1012990.15
00:13:32.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     151.29       0.07  891768.55     634.92 1013292.95
00:13:32.463 ========================================================
00:13:32.463 Total                                                                    :     345.74       0.17  922661.22     634.92 1013292.95
00:13:32.463
00:13:33.029 07:30:48 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:33.029 07:30:48 -- target/delete_subsystem.sh@35 -- # kill -0 4062182 00:13:33.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4062182) - No such process 00:13:33.029 07:30:48 -- target/delete_subsystem.sh@45 -- # NOT wait 4062182 00:13:33.029 07:30:49 -- common/autotest_common.sh@640 -- # local es=0 00:13:33.029 07:30:49 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 4062182 00:13:33.029 07:30:49 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:33.029 07:30:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:33.029 07:30:49 -- common/autotest_common.sh@632 -- # type -t wait 00:13:33.029 07:30:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:33.029 07:30:49 -- common/autotest_common.sh@643 -- # wait 4062182 00:13:33.029 07:30:49 -- common/autotest_common.sh@643 -- # es=1 00:13:33.029 07:30:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:33.029 07:30:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:33.029 07:30:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:33.029 07:30:49 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:33.029 07:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.029 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:13:33.029 07:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.029 07:30:49 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.029 07:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.029 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:13:33.029 [2024-07-14 07:30:49.019198] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.029 07:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.029 07:30:49 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.029 07:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.029 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:13:33.029 07:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.029 07:30:49 -- target/delete_subsystem.sh@54 -- # perf_pid=4062717 00:13:33.029 07:30:49 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:33.029 07:30:49 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:33.029 07:30:49 -- target/delete_subsystem.sh@57 -- # kill -0 4062717 00:13:33.029 07:30:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:33.029 EAL: No free 2048 kB hugepages
reported on node 1 00:13:33.029 [2024-07-14 07:30:49.081841] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:33.594 07:30:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:33.594 07:30:49 -- target/delete_subsystem.sh@57 -- # kill -0 4062717 00:13:33.594 07:30:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:34.159 07:30:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:34.159 07:30:50 -- target/delete_subsystem.sh@57 -- # kill -0 4062717 00:13:34.159 07:30:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:34.415 07:30:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:34.415 07:30:50 -- target/delete_subsystem.sh@57 -- # kill -0 4062717 00:13:34.415 07:30:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:34.979 07:30:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:34.979 07:30:51 -- target/delete_subsystem.sh@57 -- # kill -0 4062717 00:13:34.979 07:30:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:35.544 07:30:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:35.544 07:30:51 -- target/delete_subsystem.sh@57 -- # kill -0 4062717 00:13:35.544 07:30:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.108 07:30:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.108 07:30:52 -- target/delete_subsystem.sh@57 -- # kill -0 4062717 00:13:36.108 07:30:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.108 Initializing NVMe Controllers 00:13:36.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:36.108 Controller IO queue size 128, less than required. 00:13:36.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:36.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:36.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:36.108 Initialization complete. Launching workers. 
00:13:36.108 ======================================================== 00:13:36.108 Latency(us) 00:13:36.108 Device Information : IOPS MiB/s Average min max 00:13:36.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004421.26 1000255.74 1042112.08 00:13:36.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005065.94 1000282.33 1011274.26 00:13:36.108 ======================================================== 00:13:36.108 Total : 256.00 0.12 1004743.60 1000255.74 1042112.08 00:13:36.108 00:13:36.673 07:30:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.673 07:30:52 -- target/delete_subsystem.sh@57 -- # kill -0 4062717 00:13:36.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4062717) - No such process 00:13:36.673 07:30:52 -- target/delete_subsystem.sh@67 -- # wait 4062717 00:13:36.673 07:30:52 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:36.673 07:30:52 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:36.673 07:30:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:36.673 07:30:52 -- nvmf/common.sh@116 -- # sync 00:13:36.673 07:30:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:36.673 07:30:52 -- nvmf/common.sh@119 -- # set +e 00:13:36.673 07:30:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:36.673 07:30:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:36.673 rmmod nvme_tcp 00:13:36.673 rmmod nvme_fabrics 00:13:36.673 rmmod nvme_keyring 00:13:36.673 07:30:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:36.673 07:30:52 -- nvmf/common.sh@123 -- # set -e 00:13:36.673 07:30:52 -- nvmf/common.sh@124 -- # return 0 00:13:36.673 07:30:52 -- nvmf/common.sh@477 -- # '[' -n 4062038 ']' 00:13:36.673 07:30:52 -- nvmf/common.sh@478 -- # killprocess 4062038 00:13:36.673 07:30:52 -- common/autotest_common.sh@926 -- # '[' -z 4062038 ']' 00:13:36.673 07:30:52 -- common/autotest_common.sh@930 -- # kill -0 4062038 00:13:36.673 07:30:52 -- common/autotest_common.sh@931 -- # uname 00:13:36.673 07:30:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:36.674 07:30:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4062038 00:13:36.674 07:30:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:36.674 07:30:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:36.674 07:30:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4062038' 00:13:36.674 killing process with pid 4062038 00:13:36.674 07:30:52 -- common/autotest_common.sh@945 -- # kill 4062038 00:13:36.674 07:30:52 -- common/autotest_common.sh@950 -- # wait 4062038 00:13:36.933 07:30:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:36.933 07:30:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:36.933 07:30:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:36.933 07:30:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.933 07:30:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:36.933 07:30:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.933 07:30:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.933 07:30:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.834 07:30:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:38.834 00:13:38.834 real 0m12.735s 00:13:38.834 user 0m29.152s 00:13:38.834 sys 0m2.892s 
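
A note on the two spdk_nvme_perf runs above: delete_subsystem.sh deliberately deletes the subsystem while perf still has I/O outstanding, so the long runs of "completed with error (sct=0, sc=8)" and the roughly one-second average latencies are the expected signature of the test, not a failure. Status code type 0 with status code 0x08 reads as "Command Aborted due to SQ Deletion" in the NVMe base specification's generic command status table (our interpretation; the log prints only the raw codes). The perf flags, copied from the trace and annotated with our reading of spdk_nvme_perf's options:

# -c 0xC            core mask: lcores 2 and 3 (matches the "with lcore 2/3" lines above)
# -r '...'          transport ID of the target listener
# -t 3              run time in seconds
# -q 128            queue depth; the controller's own IO queue size is 128, hence the
#                   "less than required" warning and queuing at the NVMe driver
# -w randrw -M 70   random mixed workload, 70% reads
# -o 512            I/O size in bytes
# -P 4              number of qpairs per namespace (our reading of this option)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4

The second latency table is internally consistent; its Total row follows from the two per-core rows:

awk 'BEGIN {
    iops2 = 128.00; avg2 = 1004421.26;   # "from core 2" row
    iops3 = 128.00; avg3 = 1005065.94;   # "from core 3" row
    printf "total IOPS   : %.2f\n", iops2 + iops3                                 # 256.00
    printf "weighted avg : %.2f us\n", (iops2*avg2 + iops3*avg3)/(iops2 + iops3)  # 1004743.60
}'
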
00:13:38.834 07:30:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.834 07:30:54 -- common/autotest_common.sh@10 -- # set +x 00:13:38.834 ************************************ 00:13:38.834 END TEST nvmf_delete_subsystem 00:13:38.834 ************************************ 00:13:38.834 07:30:54 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:13:38.834 07:30:54 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:38.834 07:30:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:38.834 07:30:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.834 07:30:54 -- common/autotest_common.sh@10 -- # set +x 00:13:38.834 ************************************ 00:13:38.834 START TEST nvmf_nvme_cli 00:13:38.834 ************************************ 00:13:38.834 07:30:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:38.834 * Looking for test storage... 00:13:38.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.092 07:30:55 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.092 07:30:55 -- nvmf/common.sh@7 -- # uname -s 00:13:39.092 07:30:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.092 07:30:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.092 07:30:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.092 07:30:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.092 07:30:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.092 07:30:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.092 07:30:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.092 07:30:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.092 07:30:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.092 07:30:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.092 07:30:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.092 07:30:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:39.092 07:30:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.092 07:30:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.092 07:30:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.092 07:30:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.092 07:30:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.092 07:30:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.092 07:30:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.092 07:30:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.092 07:30:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.092 07:30:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.092 07:30:55 -- paths/export.sh@5 -- # export PATH 00:13:39.092 07:30:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.092 07:30:55 -- nvmf/common.sh@46 -- # : 0 00:13:39.092 07:30:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:39.092 07:30:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:39.092 07:30:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:39.092 07:30:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.092 07:30:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.092 07:30:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:39.092 07:30:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:39.092 07:30:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:39.092 07:30:55 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.092 07:30:55 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.092 07:30:55 -- target/nvme_cli.sh@14 -- # devs=() 00:13:39.092 07:30:55 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:39.092 07:30:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:39.092 07:30:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.092 07:30:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:39.092 07:30:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:39.092 07:30:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:39.093 07:30:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.093 07:30:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.093 07:30:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.093 07:30:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:39.093 07:30:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:39.093 07:30:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:39.093 07:30:55 -- common/autotest_common.sh@10 -- # set +x 00:13:40.994 07:30:57 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:40.994 07:30:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:40.994 07:30:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:40.994 07:30:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:40.994 07:30:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:40.994 07:30:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:40.994 07:30:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:40.994 07:30:57 -- nvmf/common.sh@294 -- # net_devs=() 00:13:40.994 07:30:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:40.994 07:30:57 -- nvmf/common.sh@295 -- # e810=() 00:13:40.994 07:30:57 -- nvmf/common.sh@295 -- # local -ga e810 00:13:40.994 07:30:57 -- nvmf/common.sh@296 -- # x722=() 00:13:40.994 07:30:57 -- nvmf/common.sh@296 -- # local -ga x722 00:13:40.994 07:30:57 -- nvmf/common.sh@297 -- # mlx=() 00:13:40.994 07:30:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:40.994 07:30:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.994 07:30:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:40.994 07:30:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:40.994 07:30:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:40.994 07:30:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:40.994 07:30:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:40.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:40.994 07:30:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:40.994 07:30:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:40.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:40.994 07:30:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
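
For context on the scan above: gather_supported_nvmf_pci_devs keeps arrays of known NIC PCI device IDs (Intel E810 variants 0x1592 and 0x159b, X722 0x37d2, plus a list of Mellanox IDs) and walks the PCI bus for matches; this host has the two ports of an E810 NIC (vendor 0x8086, device 0x159b) bound to the ice driver. A rough standalone equivalent of that lookup, assuming lspci is installed:

# List Intel E810 ports by PCI ID; -D prints the full address (e.g. 0000:0a:00.0),
# and sysfs maps each address to its kernel netdev name (cvl_0_0 / cvl_0_1 here).
for dev_id in 1592 159b; do
    for pci in $(lspci -D -d "8086:${dev_id}" | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/${pci}/net/"
    done
done
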
00:13:40.994 07:30:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:40.994 07:30:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:40.994 07:30:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:40.994 07:30:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.994 07:30:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:40.994 07:30:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.995 07:30:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:40.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:40.995 07:30:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.995 07:30:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:40.995 07:30:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.995 07:30:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:40.995 07:30:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.995 07:30:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:40.995 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:40.995 07:30:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.995 07:30:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:40.995 07:30:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:40.995 07:30:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:40.995 07:30:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:40.995 07:30:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:40.995 07:30:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.995 07:30:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.995 07:30:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.995 07:30:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:40.995 07:30:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.995 07:30:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.995 07:30:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:40.995 07:30:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.995 07:30:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.995 07:30:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:40.995 07:30:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:40.995 07:30:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.995 07:30:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.995 07:30:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.995 07:30:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.995 07:30:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:40.995 07:30:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.995 07:30:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.995 07:30:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.995 07:30:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:40.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:40.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:13:40.995 00:13:40.995 --- 10.0.0.2 ping statistics --- 00:13:40.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.995 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:13:40.995 07:30:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:13:40.995 00:13:40.995 --- 10.0.0.1 ping statistics --- 00:13:40.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.995 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:40.995 07:30:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.995 07:30:57 -- nvmf/common.sh@410 -- # return 0 00:13:40.995 07:30:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:40.995 07:30:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.995 07:30:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:40.995 07:30:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:40.995 07:30:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.995 07:30:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:40.995 07:30:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:40.995 07:30:57 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:40.995 07:30:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:40.995 07:30:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:40.995 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:13:41.254 07:30:57 -- nvmf/common.sh@469 -- # nvmfpid=4065078 00:13:41.254 07:30:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:41.254 07:30:57 -- nvmf/common.sh@470 -- # waitforlisten 4065078 00:13:41.254 07:30:57 -- common/autotest_common.sh@819 -- # '[' -z 4065078 ']' 00:13:41.254 07:30:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.254 07:30:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:41.254 07:30:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.254 07:30:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:41.254 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:13:41.254 [2024-07-14 07:30:57.205234] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:41.255 [2024-07-14 07:30:57.205303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.255 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.255 [2024-07-14 07:30:57.268556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.255 [2024-07-14 07:30:57.378444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:41.255 [2024-07-14 07:30:57.378622] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.255 [2024-07-14 07:30:57.378640] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
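
The topology being pinged above is built by nvmf_tcp_init: one E810 port (cvl_0_0) is moved into a fresh network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1; the two ports are presumably cabled so they reach each other on the wire, and the two pings verify both directions before the target starts. The target itself is then launched under "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD. Condensed from the trace, with the interface names as they appear in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns
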
00:13:41.255 [2024-07-14 07:30:57.378653] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.255 [2024-07-14 07:30:57.378708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.255 [2024-07-14 07:30:57.378752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.255 [2024-07-14 07:30:57.378789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.255 [2024-07-14 07:30:57.378792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.189 07:30:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:42.189 07:30:58 -- common/autotest_common.sh@852 -- # return 0 00:13:42.189 07:30:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:42.189 07:30:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:42.189 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 07:30:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.189 07:30:58 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.189 07:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.189 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 [2024-07-14 07:30:58.218457] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.189 07:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.189 07:30:58 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:42.189 07:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.189 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 Malloc0 00:13:42.189 07:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.189 07:30:58 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:42.189 07:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.189 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 Malloc1 00:13:42.189 07:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.189 07:30:58 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:42.189 07:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.189 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 07:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.189 07:30:58 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.189 07:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.189 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 07:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.189 07:30:58 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.189 07:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.189 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.189 07:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.189 07:30:58 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.189 07:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.190 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.190 [2024-07-14 07:30:58.300672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:13:42.190 07:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.190 07:30:58 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.190 07:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.190 07:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:42.190 07:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.190 07:30:58 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:42.448 00:13:42.448 Discovery Log Number of Records 2, Generation counter 2 00:13:42.448 =====Discovery Log Entry 0====== 00:13:42.448 trtype: tcp 00:13:42.448 adrfam: ipv4 00:13:42.448 subtype: current discovery subsystem 00:13:42.448 treq: not required 00:13:42.448 portid: 0 00:13:42.448 trsvcid: 4420 00:13:42.448 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:42.448 traddr: 10.0.0.2 00:13:42.448 eflags: explicit discovery connections, duplicate discovery information 00:13:42.448 sectype: none 00:13:42.448 =====Discovery Log Entry 1====== 00:13:42.448 trtype: tcp 00:13:42.448 adrfam: ipv4 00:13:42.448 subtype: nvme subsystem 00:13:42.448 treq: not required 00:13:42.448 portid: 0 00:13:42.448 trsvcid: 4420 00:13:42.448 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:42.448 traddr: 10.0.0.2 00:13:42.448 eflags: none 00:13:42.448 sectype: none 00:13:42.448 07:30:58 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:42.448 07:30:58 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:42.448 07:30:58 -- nvmf/common.sh@510 -- # local dev _ 00:13:42.448 07:30:58 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:42.448 07:30:58 -- nvmf/common.sh@509 -- # nvme list 00:13:42.448 07:30:58 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:42.448 07:30:58 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:42.448 07:30:58 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:42.448 07:30:58 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:42.448 07:30:58 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:42.448 07:30:58 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:43.015 07:30:59 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:43.015 07:30:59 -- common/autotest_common.sh@1177 -- # local i=0 00:13:43.015 07:30:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.015 07:30:59 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:13:43.015 07:30:59 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:13:43.015 07:30:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:44.915 07:31:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:44.915 07:31:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:44.915 07:31:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.915 07:31:01 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:13:44.915 07:31:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.915 07:31:01 -- common/autotest_common.sh@1187 -- # return 0 00:13:44.915 07:31:01 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:44.915 07:31:01 -- 
nvmf/common.sh@510 -- # local dev _ 00:13:44.915 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:44.915 07:31:01 -- nvmf/common.sh@509 -- # nvme list 00:13:45.173 07:31:01 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:45.173 07:31:01 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:45.173 07:31:01 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:45.173 /dev/nvme0n1 ]] 00:13:45.173 07:31:01 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:45.173 07:31:01 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:45.173 07:31:01 -- nvmf/common.sh@510 -- # local dev _ 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- nvmf/common.sh@509 -- # nvme list 00:13:45.173 07:31:01 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:45.173 07:31:01 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:45.173 07:31:01 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:45.173 07:31:01 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:45.173 07:31:01 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:45.173 07:31:01 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.431 07:31:01 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.431 07:31:01 -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.431 07:31:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:45.431 07:31:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.431 07:31:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:45.431 07:31:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.431 07:31:01 -- common/autotest_common.sh@1210 -- # return 0 00:13:45.431 07:31:01 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:45.431 07:31:01 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.431 07:31:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.431 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:13:45.431 07:31:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.431 07:31:01 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:45.431 07:31:01 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:45.431 07:31:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:45.431 07:31:01 -- nvmf/common.sh@116 -- # sync 00:13:45.431 07:31:01 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:45.431 07:31:01 -- nvmf/common.sh@119 -- # set +e 00:13:45.432 07:31:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:45.432 07:31:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:45.432 rmmod nvme_tcp 00:13:45.690 rmmod nvme_fabrics 00:13:45.690 rmmod nvme_keyring 00:13:45.690 07:31:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:45.690 07:31:01 -- nvmf/common.sh@123 -- # set -e 00:13:45.690 07:31:01 -- nvmf/common.sh@124 -- # return 0 00:13:45.690 07:31:01 -- nvmf/common.sh@477 -- # '[' -n 4065078 ']' 00:13:45.690 07:31:01 -- nvmf/common.sh@478 -- # killprocess 4065078 00:13:45.690 07:31:01 -- common/autotest_common.sh@926 -- # '[' -z 4065078 ']' 00:13:45.690 07:31:01 -- common/autotest_common.sh@930 -- # kill -0 4065078 00:13:45.690 07:31:01 -- common/autotest_common.sh@931 -- # uname 00:13:45.690 07:31:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:45.690 07:31:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4065078 00:13:45.690 07:31:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:45.690 07:31:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:45.690 07:31:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4065078' 00:13:45.690 killing process with pid 4065078 00:13:45.690 07:31:01 -- common/autotest_common.sh@945 -- # kill 4065078 00:13:45.690 07:31:01 -- common/autotest_common.sh@950 -- # wait 4065078 00:13:45.950 07:31:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:45.950 07:31:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:45.950 07:31:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:45.950 07:31:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.950 07:31:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:45.950 07:31:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.950 07:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.950 07:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.483 07:31:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:48.483 00:13:48.483 real 0m9.101s 00:13:48.483 user 0m18.968s 00:13:48.483 sys 0m2.213s 00:13:48.483 07:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.483 07:31:04 -- common/autotest_common.sh@10 -- # set +x 00:13:48.483 ************************************ 00:13:48.483 END TEST nvmf_nvme_cli 00:13:48.483 ************************************ 00:13:48.483 07:31:04 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:48.483 07:31:04 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:48.483 07:31:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:48.483 07:31:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:48.483 07:31:04 -- common/autotest_common.sh@10 -- # set +x 00:13:48.483 ************************************ 00:13:48.483 START TEST nvmf_host_management 00:13:48.483 ************************************ 00:13:48.483 07:31:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:48.483 * Looking for test storage... 
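
Recapping before the host_management output continues: the nvmf_nvme_cli test that just ended exercised the kernel initiator end to end with nvme-cli, i.e. discovery, connect, a namespace count check by serial number, and disconnect. The same sequence condensed from the trace, with the host NQN and ID that "nvme gen-hostnqn" produced earlier:

H='--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55'
nvme discover $H -t tcp -a 10.0.0.2 -s 4420
nvme connect  $H -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2: Malloc0 and Malloc1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
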
00:13:48.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.483 07:31:04 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.483 07:31:04 -- nvmf/common.sh@7 -- # uname -s 00:13:48.483 07:31:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.483 07:31:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.483 07:31:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.483 07:31:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.483 07:31:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.483 07:31:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.483 07:31:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.483 07:31:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.483 07:31:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.483 07:31:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.483 07:31:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.483 07:31:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.483 07:31:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.483 07:31:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.483 07:31:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.483 07:31:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.483 07:31:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.483 07:31:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.483 07:31:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.483 07:31:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.483 07:31:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.483 07:31:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.483 07:31:04 -- paths/export.sh@5 -- # export PATH 00:13:48.483 07:31:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.483 07:31:04 -- nvmf/common.sh@46 -- # : 0 00:13:48.483 07:31:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:48.483 07:31:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:48.483 07:31:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:48.483 07:31:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.483 07:31:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.483 07:31:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:48.483 07:31:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:48.483 07:31:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:48.483 07:31:04 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.483 07:31:04 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.483 07:31:04 -- target/host_management.sh@104 -- # nvmftestinit 00:13:48.483 07:31:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:48.483 07:31:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.483 07:31:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:48.483 07:31:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:48.483 07:31:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:48.483 07:31:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.483 07:31:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.483 07:31:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.483 07:31:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:48.483 07:31:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:48.483 07:31:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:48.483 07:31:04 -- common/autotest_common.sh@10 -- # set +x 00:13:50.422 07:31:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:50.422 07:31:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:50.422 07:31:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:50.422 07:31:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:50.422 07:31:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:50.422 07:31:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:50.422 07:31:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:50.422 07:31:06 -- nvmf/common.sh@294 -- # net_devs=() 00:13:50.422 07:31:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:50.422 
07:31:06 -- nvmf/common.sh@295 -- # e810=() 00:13:50.422 07:31:06 -- nvmf/common.sh@295 -- # local -ga e810 00:13:50.422 07:31:06 -- nvmf/common.sh@296 -- # x722=() 00:13:50.422 07:31:06 -- nvmf/common.sh@296 -- # local -ga x722 00:13:50.422 07:31:06 -- nvmf/common.sh@297 -- # mlx=() 00:13:50.422 07:31:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:50.422 07:31:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.422 07:31:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:50.422 07:31:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:50.422 07:31:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:50.422 07:31:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:50.422 07:31:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:50.422 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:50.422 07:31:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:50.422 07:31:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:50.422 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:50.422 07:31:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:50.422 07:31:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:50.422 07:31:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.422 07:31:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:50.422 07:31:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.422 07:31:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:13:50.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:50.422 07:31:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.422 07:31:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:50.422 07:31:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.422 07:31:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:50.422 07:31:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.422 07:31:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:50.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:50.422 07:31:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.422 07:31:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:50.422 07:31:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:50.422 07:31:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:50.422 07:31:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.422 07:31:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.422 07:31:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.422 07:31:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:50.422 07:31:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.422 07:31:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.422 07:31:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:50.422 07:31:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.422 07:31:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.422 07:31:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:50.422 07:31:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:50.422 07:31:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.422 07:31:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.422 07:31:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.422 07:31:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.422 07:31:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:50.422 07:31:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.422 07:31:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.422 07:31:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.422 07:31:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:50.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:13:50.422 00:13:50.422 --- 10.0.0.2 ping statistics --- 00:13:50.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.422 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:50.422 07:31:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:13:50.422 00:13:50.422 --- 10.0.0.1 ping statistics --- 00:13:50.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.422 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:50.422 07:31:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.422 07:31:06 -- nvmf/common.sh@410 -- # return 0 00:13:50.422 07:31:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:50.422 07:31:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.422 07:31:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:50.422 07:31:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.422 07:31:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:50.422 07:31:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:50.422 07:31:06 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:50.422 07:31:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:50.422 07:31:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:50.422 07:31:06 -- common/autotest_common.sh@10 -- # set +x 00:13:50.422 ************************************ 00:13:50.422 START TEST nvmf_host_management 00:13:50.422 ************************************ 00:13:50.422 07:31:06 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:50.422 07:31:06 -- target/host_management.sh@69 -- # starttarget 00:13:50.422 07:31:06 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:50.422 07:31:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:50.422 07:31:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:50.422 07:31:06 -- common/autotest_common.sh@10 -- # set +x 00:13:50.423 07:31:06 -- nvmf/common.sh@469 -- # nvmfpid=4067625 00:13:50.423 07:31:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:50.423 07:31:06 -- nvmf/common.sh@470 -- # waitforlisten 4067625 00:13:50.423 07:31:06 -- common/autotest_common.sh@819 -- # '[' -z 4067625 ']' 00:13:50.423 07:31:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.423 07:31:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:50.423 07:31:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.423 07:31:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:50.423 07:31:06 -- common/autotest_common.sh@10 -- # set +x 00:13:50.423 [2024-07-14 07:31:06.319546] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:50.423 [2024-07-14 07:31:06.319626] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.423 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.423 [2024-07-14 07:31:06.391459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.423 [2024-07-14 07:31:06.510758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:50.423 [2024-07-14 07:31:06.510954] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.423 [2024-07-14 07:31:06.510976] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.423 [2024-07-14 07:31:06.510991] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.423 [2024-07-14 07:31:06.511078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.423 [2024-07-14 07:31:06.511129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.423 [2024-07-14 07:31:06.511195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:50.423 [2024-07-14 07:31:06.511198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.357 07:31:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:51.357 07:31:07 -- common/autotest_common.sh@852 -- # return 0 00:13:51.357 07:31:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:51.357 07:31:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:51.357 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:13:51.357 07:31:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.357 07:31:07 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:51.357 07:31:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.357 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:13:51.357 [2024-07-14 07:31:07.254103] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.357 07:31:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.357 07:31:07 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:51.357 07:31:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:51.357 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:13:51.357 07:31:07 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:51.357 07:31:07 -- target/host_management.sh@23 -- # cat 00:13:51.357 07:31:07 -- target/host_management.sh@30 -- # rpc_cmd 00:13:51.357 07:31:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.357 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:13:51.357 Malloc0 00:13:51.357 [2024-07-14 07:31:07.314302] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.357 07:31:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.357 07:31:07 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:51.357 07:31:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:51.357 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:13:51.358 07:31:07 -- target/host_management.sh@73 -- # perfpid=4067804 00:13:51.358 07:31:07 -- target/host_management.sh@74 -- # 
waitforlisten 4067804 /var/tmp/bdevperf.sock 00:13:51.358 07:31:07 -- common/autotest_common.sh@819 -- # '[' -z 4067804 ']' 00:13:51.358 07:31:07 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:51.358 07:31:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.358 07:31:07 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:51.358 07:31:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:51.358 07:31:07 -- nvmf/common.sh@520 -- # config=() 00:13:51.358 07:31:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:51.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.358 07:31:07 -- nvmf/common.sh@520 -- # local subsystem config 00:13:51.358 07:31:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:51.358 07:31:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:51.358 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:13:51.358 07:31:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:51.358 { 00:13:51.358 "params": { 00:13:51.358 "name": "Nvme$subsystem", 00:13:51.358 "trtype": "$TEST_TRANSPORT", 00:13:51.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:51.358 "adrfam": "ipv4", 00:13:51.358 "trsvcid": "$NVMF_PORT", 00:13:51.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:51.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:51.358 "hdgst": ${hdgst:-false}, 00:13:51.358 "ddgst": ${ddgst:-false} 00:13:51.358 }, 00:13:51.358 "method": "bdev_nvme_attach_controller" 00:13:51.358 } 00:13:51.358 EOF 00:13:51.358 )") 00:13:51.358 07:31:07 -- nvmf/common.sh@542 -- # cat 00:13:51.358 07:31:07 -- nvmf/common.sh@544 -- # jq . 00:13:51.358 07:31:07 -- nvmf/common.sh@545 -- # IFS=, 00:13:51.358 07:31:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:51.358 "params": { 00:13:51.358 "name": "Nvme0", 00:13:51.358 "trtype": "tcp", 00:13:51.358 "traddr": "10.0.0.2", 00:13:51.358 "adrfam": "ipv4", 00:13:51.358 "trsvcid": "4420", 00:13:51.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:51.358 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:51.358 "hdgst": false, 00:13:51.358 "ddgst": false 00:13:51.358 }, 00:13:51.358 "method": "bdev_nvme_attach_controller" 00:13:51.358 }' 00:13:51.358 [2024-07-14 07:31:07.389655] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:51.358 [2024-07-14 07:31:07.389725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067804 ] 00:13:51.358 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.358 [2024-07-14 07:31:07.449327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.616 [2024-07-14 07:31:07.558633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.875 Running I/O for 10 seconds... 
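bdevperf reads its bdev configuration from the --json /dev/fd/63 pipe that gen_nvmf_target_json fills with the bdev_nvme_attach_controller entry printed above. A minimal standalone sketch of the same run; the outer "subsystems"/"bdev" wrapper is assumed here from SPDK's usual JSON-config layout, while the params and flags are taken verbatim from this log:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # -q 64: queue depth, -o 65536: 64 KiB I/O, -w verify: read-back verification, -t 10: seconds
    ./spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10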
00:13:52.442 07:31:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.442 07:31:08 -- common/autotest_common.sh@852 -- # return 0 00:13:52.442 07:31:08 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:52.442 07:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.442 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:13:52.442 07:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.443 07:31:08 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:52.443 07:31:08 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:52.443 07:31:08 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:52.443 07:31:08 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:52.443 07:31:08 -- target/host_management.sh@52 -- # local ret=1 00:13:52.443 07:31:08 -- target/host_management.sh@53 -- # local i 00:13:52.443 07:31:08 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:52.443 07:31:08 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:52.443 07:31:08 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:52.443 07:31:08 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:52.443 07:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.443 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:13:52.443 07:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.443 07:31:08 -- target/host_management.sh@55 -- # read_io_count=1097 00:13:52.443 07:31:08 -- target/host_management.sh@58 -- # '[' 1097 -ge 100 ']' 00:13:52.443 07:31:08 -- target/host_management.sh@59 -- # ret=0 00:13:52.443 07:31:08 -- target/host_management.sh@60 -- # break 00:13:52.443 07:31:08 -- target/host_management.sh@64 -- # return 0 00:13:52.443 07:31:08 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:52.443 07:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.443 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:13:52.443 [2024-07-14 07:31:08.373909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to be set 00:13:52.443 [2024-07-14 07:31:08.374005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to be set 00:13:52.443 [2024-07-14 07:31:08.374022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to be set 00:13:52.443 [2024-07-14 07:31:08.374035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to be set 00:13:52.443 [2024-07-14 07:31:08.374047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to be set 00:13:52.443 [2024-07-14 07:31:08.374061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to be set 00:13:52.443 [2024-07-14 07:31:08.374073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to be set 00:13:52.443 [2024-07-14 07:31:08.374086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to 
be set 00:13:52.443 [2024-07-14 07:31:08.374098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7b480 is same with the state(5) to be set
[... the same tcp.c:1574 recv-state error for tqpair=0xd7b480 repeats a few dozen more times within 07:31:08.374, identical apart from the microsecond timestamp ...]
00:13:52.443 [2024-07-14 07:31:08.375242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:52.443 [2024-07-14 07:31:08.375282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... roughly sixty more READ/WRITE commands (sqid:1, len:128, lba values between 17152 and 28288, consistent with the -q 64 queue depth) are printed and completed the same way, ABORTED - SQ DELETION (00/08) ...]
00:13:52.445 [2024-07-14 07:31:08.377185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:52.445 [2024-07-14 07:31:08.377199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:52.445 [2024-07-14 07:31:08.377213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74db50 is same with the state(5) to be set
00:13:52.445 [2024-07-14 07:31:08.377285] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x74db50 was disconnected and freed. reset controller.
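The abort storm above is the point of the test: once bdevperf has verifiably done I/O, the target revokes the host, the TCP queue pair is deleted, and every in-flight command completes as ABORTED - SQ DELETION before bdev_nvme resets and reconnects the controller. A condensed sketch of that sequence with rpc.py; the RPC names, jq filter, and read threshold are taken from this log, while the loop pacing is a hypothetical stand-in for the test's own retry loop:

    RPC=./spdk/scripts/rpc.py
    # wait until the verify job has completed at least 100 reads
    while :; do
      reads=$($RPC -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
              jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      sleep 0.25   # hypothetical pacing
    done
    # revoke the host NQN: the target drops the connection, aborting in-flight I/O
    $RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # re-allow it so the initiator-side controller reset can reconnect
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0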
00:13:52.445 07:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.445 07:31:08 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:52.445 [2024-07-14 07:31:08.378430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:52.445 07:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.445 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:13:52.445 task offset: 22656 on job bdev=Nvme0n1 fails 00:13:52.445 00:13:52.445 Latency(us) 00:13:52.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.445 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:52.445 Job: Nvme0n1 ended in about 0.52 seconds with error 00:13:52.445 Verification LBA range: start 0x0 length 0x400 00:13:52.445 Nvme0n1 : 0.52 2283.24 142.70 123.63 0.00 26240.03 5364.24 29709.65 00:13:52.445 =================================================================================================================== 00:13:52.445 Total : 2283.24 142.70 123.63 0.00 26240.03 5364.24 29709.65 00:13:52.445 [2024-07-14 07:31:08.380397] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.445 [2024-07-14 07:31:08.380427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x750400 (9): Bad file descriptor 00:13:52.445 07:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.445 07:31:08 -- target/host_management.sh@87 -- # sleep 1 00:13:52.445 [2024-07-14 07:31:08.391419] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:53.380 07:31:09 -- target/host_management.sh@91 -- # kill -9 4067804 00:13:53.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4067804) - No such process 00:13:53.380 07:31:09 -- target/host_management.sh@91 -- # true 00:13:53.380 07:31:09 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:53.380 07:31:09 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:53.380 07:31:09 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:53.380 07:31:09 -- nvmf/common.sh@520 -- # config=() 00:13:53.380 07:31:09 -- nvmf/common.sh@520 -- # local subsystem config 00:13:53.380 07:31:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:53.381 07:31:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:53.381 { 00:13:53.381 "params": { 00:13:53.381 "name": "Nvme$subsystem", 00:13:53.381 "trtype": "$TEST_TRANSPORT", 00:13:53.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.381 "adrfam": "ipv4", 00:13:53.381 "trsvcid": "$NVMF_PORT", 00:13:53.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.381 "hdgst": ${hdgst:-false}, 00:13:53.381 "ddgst": ${ddgst:-false} 00:13:53.381 }, 00:13:53.381 "method": "bdev_nvme_attach_controller" 00:13:53.381 } 00:13:53.381 EOF 00:13:53.381 )") 00:13:53.381 07:31:09 -- nvmf/common.sh@542 -- # cat 00:13:53.381 07:31:09 -- nvmf/common.sh@544 -- # jq . 
00:13:53.381 07:31:09 -- nvmf/common.sh@545 -- # IFS=, 00:13:53.381 07:31:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:53.381 "params": { 00:13:53.381 "name": "Nvme0", 00:13:53.381 "trtype": "tcp", 00:13:53.381 "traddr": "10.0.0.2", 00:13:53.381 "adrfam": "ipv4", 00:13:53.381 "trsvcid": "4420", 00:13:53.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:53.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:53.381 "hdgst": false, 00:13:53.381 "ddgst": false 00:13:53.381 }, 00:13:53.381 "method": "bdev_nvme_attach_controller" 00:13:53.381 }' 00:13:53.381 [2024-07-14 07:31:09.431390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:53.381 [2024-07-14 07:31:09.431467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068087 ] 00:13:53.381 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.381 [2024-07-14 07:31:09.491258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.639 [2024-07-14 07:31:09.601928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.896 Running I/O for 1 seconds... 00:13:54.831 00:13:54.831 Latency(us) 00:13:54.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.831 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:54.831 Verification LBA range: start 0x0 length 0x400 00:13:54.831 Nvme0n1 : 1.01 2758.04 172.38 0.00 0.00 22894.89 1686.95 34952.53 00:13:54.831 =================================================================================================================== 00:13:54.831 Total : 2758.04 172.38 0.00 0.00 22894.89 1686.95 34952.53 00:13:55.088 07:31:11 -- target/host_management.sh@101 -- # stoptarget 00:13:55.088 07:31:11 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:55.088 07:31:11 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:55.088 07:31:11 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:55.088 07:31:11 -- target/host_management.sh@40 -- # nvmftestfini 00:13:55.088 07:31:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:55.088 07:31:11 -- nvmf/common.sh@116 -- # sync 00:13:55.088 07:31:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:55.088 07:31:11 -- nvmf/common.sh@119 -- # set +e 00:13:55.088 07:31:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:55.088 07:31:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:55.088 rmmod nvme_tcp 00:13:55.088 rmmod nvme_fabrics 00:13:55.088 rmmod nvme_keyring 00:13:55.088 07:31:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:55.088 07:31:11 -- nvmf/common.sh@123 -- # set -e 00:13:55.088 07:31:11 -- nvmf/common.sh@124 -- # return 0 00:13:55.088 07:31:11 -- nvmf/common.sh@477 -- # '[' -n 4067625 ']' 00:13:55.088 07:31:11 -- nvmf/common.sh@478 -- # killprocess 4067625 00:13:55.088 07:31:11 -- common/autotest_common.sh@926 -- # '[' -z 4067625 ']' 00:13:55.088 07:31:11 -- common/autotest_common.sh@930 -- # kill -0 4067625 00:13:55.088 07:31:11 -- common/autotest_common.sh@931 -- # uname 00:13:55.088 07:31:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:55.088 07:31:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4067625 00:13:55.088 07:31:11 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:55.088 07:31:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:55.088 07:31:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4067625' 00:13:55.088 killing process with pid 4067625 00:13:55.088 07:31:11 -- common/autotest_common.sh@945 -- # kill 4067625 00:13:55.088 07:31:11 -- common/autotest_common.sh@950 -- # wait 4067625 00:13:55.346 [2024-07-14 07:31:11.459965] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:55.346 07:31:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:55.346 07:31:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:55.346 07:31:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:55.346 07:31:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.346 07:31:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:55.346 07:31:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.346 07:31:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.346 07:31:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.885 07:31:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:57.885 00:13:57.885 real 0m7.256s 00:13:57.885 user 0m21.558s 00:13:57.885 sys 0m1.363s 00:13:57.885 07:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.885 07:31:13 -- common/autotest_common.sh@10 -- # set +x 00:13:57.885 ************************************ 00:13:57.885 END TEST nvmf_host_management 00:13:57.885 ************************************ 00:13:57.885 07:31:13 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:57.885 00:13:57.885 real 0m9.473s 00:13:57.885 user 0m22.350s 00:13:57.885 sys 0m2.809s 00:13:57.885 07:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.885 07:31:13 -- common/autotest_common.sh@10 -- # set +x 00:13:57.885 ************************************ 00:13:57.885 END TEST nvmf_host_management 00:13:57.885 ************************************ 00:13:57.885 07:31:13 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:57.885 07:31:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:57.885 07:31:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:57.886 07:31:13 -- common/autotest_common.sh@10 -- # set +x 00:13:57.886 ************************************ 00:13:57.886 START TEST nvmf_lvol 00:13:57.886 ************************************ 00:13:57.886 07:31:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:57.886 * Looking for test storage... 
00:13:57.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.886 07:31:13 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.886 07:31:13 -- nvmf/common.sh@7 -- # uname -s 00:13:57.886 07:31:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.886 07:31:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.886 07:31:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.886 07:31:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.886 07:31:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.886 07:31:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.886 07:31:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.886 07:31:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.886 07:31:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.886 07:31:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.886 07:31:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.886 07:31:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.886 07:31:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.886 07:31:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.886 07:31:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.886 07:31:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.886 07:31:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.886 07:31:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.886 07:31:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.886 07:31:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.886 07:31:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.886 07:31:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.886 07:31:13 -- paths/export.sh@5 -- # export PATH 00:13:57.886 07:31:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.886 07:31:13 -- nvmf/common.sh@46 -- # : 0 00:13:57.886 07:31:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:57.886 07:31:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:57.886 07:31:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:57.886 07:31:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.886 07:31:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.886 07:31:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:57.886 07:31:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:57.886 07:31:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:57.886 07:31:13 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.886 07:31:13 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.886 07:31:13 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:57.886 07:31:13 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:57.886 07:31:13 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.886 07:31:13 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:57.886 07:31:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:57.886 07:31:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.886 07:31:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:57.886 07:31:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:57.886 07:31:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:57.886 07:31:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.886 07:31:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.886 07:31:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.886 07:31:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:57.886 07:31:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:57.886 07:31:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:57.886 07:31:13 -- common/autotest_common.sh@10 -- # set +x 00:13:59.786 07:31:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:59.786 07:31:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:59.786 07:31:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:59.786 07:31:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:59.786 07:31:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:59.786 07:31:15 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:13:59.786 07:31:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:59.786 07:31:15 -- nvmf/common.sh@294 -- # net_devs=() 00:13:59.786 07:31:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:59.786 07:31:15 -- nvmf/common.sh@295 -- # e810=() 00:13:59.786 07:31:15 -- nvmf/common.sh@295 -- # local -ga e810 00:13:59.786 07:31:15 -- nvmf/common.sh@296 -- # x722=() 00:13:59.786 07:31:15 -- nvmf/common.sh@296 -- # local -ga x722 00:13:59.786 07:31:15 -- nvmf/common.sh@297 -- # mlx=() 00:13:59.786 07:31:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:59.786 07:31:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.786 07:31:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:59.786 07:31:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:59.786 07:31:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:59.786 07:31:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:59.786 07:31:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:59.786 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:59.786 07:31:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:59.786 07:31:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:59.786 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:59.786 07:31:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:59.786 07:31:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:59.786 07:31:15 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.786 07:31:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:59.786 07:31:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.786 07:31:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:59.786 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:59.786 07:31:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.786 07:31:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:59.786 07:31:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.786 07:31:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:59.786 07:31:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.786 07:31:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:59.786 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:59.786 07:31:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.786 07:31:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:59.786 07:31:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:59.786 07:31:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:59.786 07:31:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:59.786 07:31:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.786 07:31:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.786 07:31:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.786 07:31:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:59.786 07:31:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.786 07:31:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.786 07:31:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:59.786 07:31:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.786 07:31:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.786 07:31:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:59.786 07:31:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:59.786 07:31:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.786 07:31:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.786 07:31:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.786 07:31:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.786 07:31:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:59.786 07:31:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.786 07:31:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.786 07:31:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.786 07:31:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:59.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:13:59.786 00:13:59.786 --- 10.0.0.2 ping statistics --- 00:13:59.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.786 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:13:59.786 07:31:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:13:59.786 00:13:59.786 --- 10.0.0.1 ping statistics --- 00:13:59.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.786 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:13:59.786 07:31:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.786 07:31:15 -- nvmf/common.sh@410 -- # return 0 00:13:59.786 07:31:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:59.787 07:31:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.787 07:31:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:59.787 07:31:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:59.787 07:31:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.787 07:31:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:59.787 07:31:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:59.787 07:31:15 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:59.787 07:31:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:59.787 07:31:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:59.787 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:13:59.787 07:31:15 -- nvmf/common.sh@469 -- # nvmfpid=4070200 00:13:59.787 07:31:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:59.787 07:31:15 -- nvmf/common.sh@470 -- # waitforlisten 4070200 00:13:59.787 07:31:15 -- common/autotest_common.sh@819 -- # '[' -z 4070200 ']' 00:13:59.787 07:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.787 07:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:59.787 07:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.787 07:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:59.787 07:31:15 -- common/autotest_common.sh@10 -- # set +x 00:13:59.787 [2024-07-14 07:31:15.777205] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:59.787 [2024-07-14 07:31:15.777279] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.787 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.787 [2024-07-14 07:31:15.839054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:59.787 [2024-07-14 07:31:15.946465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:59.787 [2024-07-14 07:31:15.946609] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.787 [2024-07-14 07:31:15.946626] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.787 [2024-07-14 07:31:15.946640] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
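The TCP "transport init" traced above wires the two discovered E810 ports (both 0x8086:0x159b, mapped to their net devices via /sys/bus/pci/devices/$pci/net/) into a back-to-back topology: cvl_0_0 is moved into a private network namespace and serves as the target at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands in the trace, the wiring is roughly:

  ip netns add cvl_0_0_ns_spdk                          # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port inside
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target check

The one-packet pings in each direction verify the link before nvmf_tgt is launched, and because the target port lives inside the namespace, every target-side command in the rest of the log carries the ip netns exec cvl_0_0_ns_spdk prefix.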
00:13:59.787 [2024-07-14 07:31:15.946708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.787 [2024-07-14 07:31:15.946728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.787 [2024-07-14 07:31:15.946731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.720 07:31:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:00.720 07:31:16 -- common/autotest_common.sh@852 -- # return 0 00:14:00.720 07:31:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:00.720 07:31:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:00.720 07:31:16 -- common/autotest_common.sh@10 -- # set +x 00:14:00.720 07:31:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.720 07:31:16 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:00.977 [2024-07-14 07:31:17.008473] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.977 07:31:17 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.235 07:31:17 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:01.235 07:31:17 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:01.493 07:31:17 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:01.493 07:31:17 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:01.750 07:31:17 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:02.008 07:31:18 -- target/nvmf_lvol.sh@29 -- # lvs=150b4d68-daab-428d-9d31-e33e093bf8a9 00:14:02.008 07:31:18 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 150b4d68-daab-428d-9d31-e33e093bf8a9 lvol 20 00:14:02.265 07:31:18 -- target/nvmf_lvol.sh@32 -- # lvol=46e34cab-aad2-476d-b2f8-dabe0bc967ab 00:14:02.265 07:31:18 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:02.523 07:31:18 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46e34cab-aad2-476d-b2f8-dabe0bc967ab 00:14:02.781 07:31:18 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:03.039 [2024-07-14 07:31:18.989641] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.039 07:31:19 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.297 07:31:19 -- target/nvmf_lvol.sh@42 -- # perf_pid=4070656 00:14:03.297 07:31:19 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:03.297 07:31:19 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:03.297 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.245 
07:31:20 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 46e34cab-aad2-476d-b2f8-dabe0bc967ab MY_SNAPSHOT 00:14:04.553 07:31:20 -- target/nvmf_lvol.sh@47 -- # snapshot=83781f6f-29be-4493-9950-1ce11d0a96b6 00:14:04.553 07:31:20 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 46e34cab-aad2-476d-b2f8-dabe0bc967ab 30 00:14:04.812 07:31:20 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 83781f6f-29be-4493-9950-1ce11d0a96b6 MY_CLONE 00:14:05.070 07:31:21 -- target/nvmf_lvol.sh@49 -- # clone=1cba0beb-3b3c-48a6-9ca5-efbc8605bd80 00:14:05.070 07:31:21 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1cba0beb-3b3c-48a6-9ca5-efbc8605bd80 00:14:05.636 07:31:21 -- target/nvmf_lvol.sh@53 -- # wait 4070656 00:14:13.743 Initializing NVMe Controllers 00:14:13.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:13.743 Controller IO queue size 128, less than required. 00:14:13.743 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:13.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:13.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:13.743 Initialization complete. Launching workers. 00:14:13.743 ======================================================== 00:14:13.743 Latency(us) 00:14:13.743 Device Information : IOPS MiB/s Average min max 00:14:13.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11137.15 43.50 11495.93 2235.92 70475.32 00:14:13.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11051.66 43.17 11587.01 1941.18 65252.01 00:14:13.743 ======================================================== 00:14:13.743 Total : 22188.81 86.68 11541.29 1941.18 70475.32 00:14:13.743 00:14:13.743 07:31:29 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:13.743 07:31:29 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 46e34cab-aad2-476d-b2f8-dabe0bc967ab 00:14:14.001 07:31:30 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 150b4d68-daab-428d-9d31-e33e093bf8a9 00:14:14.259 07:31:30 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:14.259 07:31:30 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:14.259 07:31:30 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:14.259 07:31:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:14.259 07:31:30 -- nvmf/common.sh@116 -- # sync 00:14:14.259 07:31:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:14.259 07:31:30 -- nvmf/common.sh@119 -- # set +e 00:14:14.259 07:31:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:14.259 07:31:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:14.259 rmmod nvme_tcp 00:14:14.259 rmmod nvme_fabrics 00:14:14.259 rmmod nvme_keyring 00:14:14.518 07:31:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:14.518 07:31:30 -- nvmf/common.sh@123 -- # set -e 00:14:14.518 07:31:30 -- nvmf/common.sh@124 -- # return 0 00:14:14.518 07:31:30 -- nvmf/common.sh@477 -- # '[' -n 4070200 ']' 
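Stripped of timestamps and absolute paths, the nvmf_lvol pass that just finished reduces to the RPC sequence below (a condensed sketch: rpc.py stands for the full scripts/rpc.py path used in the trace, and $lvs/$lvol/$snap/$clone for the UUIDs it printed):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                      # Malloc0 (64 MiB, 512 B blocks)
  rpc.py bdev_malloc_create 64 512                      # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs             # -> $lvs
  rpc.py bdev_lvol_create -u $lvs lvol 20               # 20 MiB volume -> $lvol
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # with spdk_nvme_perf driving 4 KiB random writes at qd 128 over the fabric:
  rpc.py bdev_lvol_snapshot $lvol MY_SNAPSHOT           # -> $snap
  rpc.py bdev_lvol_resize $lvol 30                      # grow 20 -> 30 MiB
  rpc.py bdev_lvol_clone $snap MY_CLONE                 # -> $clone
  rpc.py bdev_lvol_inflate $clone

The point is that snapshot, resize, clone and inflate all land while the initiator is writing, and the run still completes: the latency table above shows the two perf workers together sustaining about 22.2k IOPS across the ten seconds.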
00:14:14.518 07:31:30 -- nvmf/common.sh@478 -- # killprocess 4070200 00:14:14.518 07:31:30 -- common/autotest_common.sh@926 -- # '[' -z 4070200 ']' 00:14:14.518 07:31:30 -- common/autotest_common.sh@930 -- # kill -0 4070200 00:14:14.518 07:31:30 -- common/autotest_common.sh@931 -- # uname 00:14:14.518 07:31:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:14.518 07:31:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4070200 00:14:14.518 07:31:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:14.518 07:31:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:14.518 07:31:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4070200' 00:14:14.518 killing process with pid 4070200 00:14:14.518 07:31:30 -- common/autotest_common.sh@945 -- # kill 4070200 00:14:14.518 07:31:30 -- common/autotest_common.sh@950 -- # wait 4070200 00:14:14.777 07:31:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:14.777 07:31:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:14.777 07:31:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:14.777 07:31:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.777 07:31:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:14.777 07:31:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.777 07:31:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.777 07:31:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.678 07:31:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:16.678 00:14:16.678 real 0m19.253s 00:14:16.678 user 1m5.468s 00:14:16.678 sys 0m5.760s 00:14:16.678 07:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.678 07:31:32 -- common/autotest_common.sh@10 -- # set +x 00:14:16.678 ************************************ 00:14:16.678 END TEST nvmf_lvol 00:14:16.678 ************************************ 00:14:16.936 07:31:32 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:16.936 07:31:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:16.936 07:31:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:16.936 07:31:32 -- common/autotest_common.sh@10 -- # set +x 00:14:16.936 ************************************ 00:14:16.936 START TEST nvmf_lvs_grow 00:14:16.936 ************************************ 00:14:16.936 07:31:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:16.936 * Looking for test storage... 
00:14:16.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.936 07:31:32 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.936 07:31:32 -- nvmf/common.sh@7 -- # uname -s 00:14:16.936 07:31:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.936 07:31:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.936 07:31:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.936 07:31:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.936 07:31:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.936 07:31:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.936 07:31:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.936 07:31:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.936 07:31:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.936 07:31:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.936 07:31:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.936 07:31:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.936 07:31:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.936 07:31:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.936 07:31:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.936 07:31:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.936 07:31:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.936 07:31:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.936 07:31:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.936 07:31:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.937 07:31:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.937 07:31:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.937 07:31:32 -- paths/export.sh@5 -- # export PATH 00:14:16.937 07:31:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.937 07:31:32 -- nvmf/common.sh@46 -- # : 0 00:14:16.937 07:31:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:16.937 07:31:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:16.937 07:31:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:16.937 07:31:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.937 07:31:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.937 07:31:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:16.937 07:31:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:16.937 07:31:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:16.937 07:31:32 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.937 07:31:32 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:16.937 07:31:32 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:16.937 07:31:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:16.937 07:31:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.937 07:31:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:16.937 07:31:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:16.937 07:31:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:16.937 07:31:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.937 07:31:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.937 07:31:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.937 07:31:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:16.937 07:31:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:16.937 07:31:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:16.937 07:31:32 -- common/autotest_common.sh@10 -- # set +x 00:14:18.838 07:31:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:18.838 07:31:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:18.838 07:31:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:18.838 07:31:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:18.838 07:31:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:18.838 07:31:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:18.838 07:31:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:18.838 07:31:34 -- nvmf/common.sh@294 -- # net_devs=() 00:14:18.838 07:31:34 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:14:18.838 07:31:34 -- nvmf/common.sh@295 -- # e810=() 00:14:18.838 07:31:34 -- nvmf/common.sh@295 -- # local -ga e810 00:14:18.838 07:31:34 -- nvmf/common.sh@296 -- # x722=() 00:14:18.838 07:31:34 -- nvmf/common.sh@296 -- # local -ga x722 00:14:18.838 07:31:34 -- nvmf/common.sh@297 -- # mlx=() 00:14:18.838 07:31:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:18.838 07:31:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.838 07:31:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:18.838 07:31:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:18.838 07:31:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:18.838 07:31:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:18.838 07:31:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:18.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:18.838 07:31:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:18.838 07:31:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:18.838 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:18.838 07:31:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:18.838 07:31:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:18.838 07:31:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.838 07:31:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:18.838 07:31:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.838 07:31:34 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:18.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:18.838 07:31:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.838 07:31:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:18.838 07:31:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.838 07:31:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:18.838 07:31:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.838 07:31:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:18.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:18.838 07:31:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.838 07:31:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:18.838 07:31:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:18.838 07:31:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:18.838 07:31:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.838 07:31:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.838 07:31:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.838 07:31:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:18.838 07:31:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.838 07:31:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.838 07:31:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:18.838 07:31:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.838 07:31:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.838 07:31:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:18.838 07:31:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:18.838 07:31:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.838 07:31:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.838 07:31:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.838 07:31:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.838 07:31:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:18.838 07:31:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.838 07:31:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.838 07:31:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.838 07:31:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:18.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:14:18.838 00:14:18.838 --- 10.0.0.2 ping statistics --- 00:14:18.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.838 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:14:18.838 07:31:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:14:18.838 00:14:18.838 --- 10.0.0.1 ping statistics --- 00:14:18.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.838 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:18.838 07:31:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.838 07:31:34 -- nvmf/common.sh@410 -- # return 0 00:14:18.838 07:31:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:18.838 07:31:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.838 07:31:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:18.838 07:31:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.838 07:31:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:18.839 07:31:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.097 07:31:35 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:19.097 07:31:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.097 07:31:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:19.097 07:31:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.097 07:31:35 -- nvmf/common.sh@469 -- # nvmfpid=4073954 00:14:19.097 07:31:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:19.097 07:31:35 -- nvmf/common.sh@470 -- # waitforlisten 4073954 00:14:19.097 07:31:35 -- common/autotest_common.sh@819 -- # '[' -z 4073954 ']' 00:14:19.097 07:31:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.097 07:31:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.097 07:31:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.097 07:31:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.097 07:31:35 -- common/autotest_common.sh@10 -- # set +x 00:14:19.097 [2024-07-14 07:31:35.067687] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:19.097 [2024-07-14 07:31:35.067761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.097 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.097 [2024-07-14 07:31:35.139839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.097 [2024-07-14 07:31:35.256313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.097 [2024-07-14 07:31:35.256481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.097 [2024-07-14 07:31:35.256500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.097 [2024-07-14 07:31:35.256514] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
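Where nvmf_lvol layered logical volumes on a RAID-0 of malloc bdevs, this lvs_grow pass builds them on a file-backed AIO bdev so the backing store itself can be enlarged mid-test. The clean-grow scenario traced below condenses to (a sketch: aio_bdev abbreviates both the 200M file under test/nvmf/target/ and the bdev created on it, and $lvs the lvstore UUID):

  truncate -s 200M aio_bdev                             # backing file
  rpc.py bdev_aio_create aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs   # 49 data clusters of 4 MiB
  rpc.py bdev_lvol_create -u $lvs lvol 150              # 150 MiB volume, exported as cnode0
  truncate -s 400M aio_bdev                             # grow the file under the bdev
  rpc.py bdev_aio_rescan aio_bdev                       # block count 51200 -> 102400
  rpc.py bdev_lvol_grow_lvstore -u $lvs                 # total_data_clusters 49 -> 99

The jq probes against bdev_lvol_get_lvstores assert the cluster count before and after, and the grow is issued while a remote bdevperf job is still writing to the exported volume.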
00:14:19.097 [2024-07-14 07:31:35.256560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.028 07:31:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.028 07:31:35 -- common/autotest_common.sh@852 -- # return 0 00:14:20.028 07:31:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.028 07:31:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:20.028 07:31:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.028 07:31:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.028 07:31:36 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:20.286 [2024-07-14 07:31:36.233075] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:20.286 07:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:20.286 07:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:20.286 07:31:36 -- common/autotest_common.sh@10 -- # set +x 00:14:20.286 ************************************ 00:14:20.286 START TEST lvs_grow_clean 00:14:20.286 ************************************ 00:14:20.286 07:31:36 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:20.286 07:31:36 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:20.543 07:31:36 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:20.543 07:31:36 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:20.800 07:31:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=607181df-8403-42de-8c51-e9c69861eb6d 00:14:20.800 07:31:36 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:20.800 07:31:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:21.058 07:31:36 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:21.058 07:31:36 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:21.058 07:31:36 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 607181df-8403-42de-8c51-e9c69861eb6d lvol 150 00:14:21.316 07:31:37 -- target/nvmf_lvs_grow.sh@33 -- # lvol=61ad79cd-51d4-41df-8565-7e799cfbd201 00:14:21.316 07:31:37 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:21.316 07:31:37 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:21.574 [2024-07-14 07:31:37.490091] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:21.574 [2024-07-14 07:31:37.490188] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:21.574 true 00:14:21.574 07:31:37 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:21.574 07:31:37 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:21.574 07:31:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:21.574 07:31:37 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:21.831 07:31:37 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61ad79cd-51d4-41df-8565-7e799cfbd201 00:14:22.089 07:31:38 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:22.346 [2024-07-14 07:31:38.428985] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.346 07:31:38 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.605 07:31:38 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4074526 00:14:22.605 07:31:38 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:22.605 07:31:38 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.605 07:31:38 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4074526 /var/tmp/bdevperf.sock 00:14:22.605 07:31:38 -- common/autotest_common.sh@819 -- # '[' -z 4074526 ']' 00:14:22.605 07:31:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.605 07:31:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:22.605 07:31:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.605 07:31:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:22.605 07:31:38 -- common/autotest_common.sh@10 -- # set +x 00:14:22.605 [2024-07-14 07:31:38.713265] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
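The I/O generator in this pass is bdevperf rather than spdk_nvme_perf: it is started with its own RPC socket (-r /var/tmp/bdevperf.sock) and -z so it waits to be configured, and the harness then, as the trace below shows, attaches the exported subsystem as a bdev and triggers the timed run remotely. Roughly (bdevperf.py abbreviates the full examples/bdev/bdevperf/bdevperf.py path):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0                   # surfaces Nvme0n1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # 10 s randwrite, qd 128

The bdev_get_bdevs dump that follows confirms Nvme0n1 is the exported logical volume (38912 blocks of 4096 bytes, i.e. the 150 MiB lvol rounded up to its 4 MiB cluster boundary), and the bdev_lvol_grow_lvstore call lands around the two-second sample of the run.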
00:14:22.605 [2024-07-14 07:31:38.713343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074526 ] 00:14:22.605 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.605 [2024-07-14 07:31:38.771442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.863 [2024-07-14 07:31:38.877323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.796 07:31:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:23.796 07:31:39 -- common/autotest_common.sh@852 -- # return 0 00:14:23.796 07:31:39 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:24.055 Nvme0n1 00:14:24.055 07:31:40 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:24.314 [ 00:14:24.314 { 00:14:24.314 "name": "Nvme0n1", 00:14:24.314 "aliases": [ 00:14:24.314 "61ad79cd-51d4-41df-8565-7e799cfbd201" 00:14:24.314 ], 00:14:24.314 "product_name": "NVMe disk", 00:14:24.314 "block_size": 4096, 00:14:24.314 "num_blocks": 38912, 00:14:24.314 "uuid": "61ad79cd-51d4-41df-8565-7e799cfbd201", 00:14:24.314 "assigned_rate_limits": { 00:14:24.314 "rw_ios_per_sec": 0, 00:14:24.314 "rw_mbytes_per_sec": 0, 00:14:24.314 "r_mbytes_per_sec": 0, 00:14:24.314 "w_mbytes_per_sec": 0 00:14:24.314 }, 00:14:24.314 "claimed": false, 00:14:24.314 "zoned": false, 00:14:24.314 "supported_io_types": { 00:14:24.314 "read": true, 00:14:24.314 "write": true, 00:14:24.314 "unmap": true, 00:14:24.314 "write_zeroes": true, 00:14:24.314 "flush": true, 00:14:24.314 "reset": true, 00:14:24.314 "compare": true, 00:14:24.314 "compare_and_write": true, 00:14:24.314 "abort": true, 00:14:24.314 "nvme_admin": true, 00:14:24.314 "nvme_io": true 00:14:24.314 }, 00:14:24.314 "driver_specific": { 00:14:24.314 "nvme": [ 00:14:24.314 { 00:14:24.314 "trid": { 00:14:24.314 "trtype": "TCP", 00:14:24.314 "adrfam": "IPv4", 00:14:24.314 "traddr": "10.0.0.2", 00:14:24.314 "trsvcid": "4420", 00:14:24.314 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:24.314 }, 00:14:24.314 "ctrlr_data": { 00:14:24.314 "cntlid": 1, 00:14:24.314 "vendor_id": "0x8086", 00:14:24.314 "model_number": "SPDK bdev Controller", 00:14:24.314 "serial_number": "SPDK0", 00:14:24.314 "firmware_revision": "24.01.1", 00:14:24.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:24.314 "oacs": { 00:14:24.314 "security": 0, 00:14:24.314 "format": 0, 00:14:24.314 "firmware": 0, 00:14:24.314 "ns_manage": 0 00:14:24.314 }, 00:14:24.314 "multi_ctrlr": true, 00:14:24.314 "ana_reporting": false 00:14:24.314 }, 00:14:24.314 "vs": { 00:14:24.314 "nvme_version": "1.3" 00:14:24.314 }, 00:14:24.314 "ns_data": { 00:14:24.314 "id": 1, 00:14:24.314 "can_share": true 00:14:24.314 } 00:14:24.314 } 00:14:24.314 ], 00:14:24.314 "mp_policy": "active_passive" 00:14:24.314 } 00:14:24.314 } 00:14:24.314 ] 00:14:24.314 07:31:40 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4074680 00:14:24.314 07:31:40 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:24.314 07:31:40 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.314 Running I/O 
for 10 seconds... 00:14:25.247 Latency(us) 00:14:25.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.247 Nvme0n1 : 1.00 14799.00 57.81 0.00 0.00 0.00 0.00 0.00 00:14:25.247 =================================================================================================================== 00:14:25.247 Total : 14799.00 57.81 0.00 0.00 0.00 0.00 0.00 00:14:25.247 00:14:26.180 07:31:42 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:26.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.439 Nvme0n1 : 2.00 14977.00 58.50 0.00 0.00 0.00 0.00 0.00 00:14:26.439 =================================================================================================================== 00:14:26.439 Total : 14977.00 58.50 0.00 0.00 0.00 0.00 0.00 00:14:26.439 00:14:26.439 true 00:14:26.439 07:31:42 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:26.439 07:31:42 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:26.697 07:31:42 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:26.697 07:31:42 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:26.697 07:31:42 -- target/nvmf_lvs_grow.sh@65 -- # wait 4074680 00:14:27.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.264 Nvme0n1 : 3.00 15066.33 58.85 0.00 0.00 0.00 0.00 0.00 00:14:27.264 =================================================================================================================== 00:14:27.264 Total : 15066.33 58.85 0.00 0.00 0.00 0.00 0.00 00:14:27.264 00:14:28.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.636 Nvme0n1 : 4.00 15155.75 59.20 0.00 0.00 0.00 0.00 0.00 00:14:28.637 =================================================================================================================== 00:14:28.637 Total : 15155.75 59.20 0.00 0.00 0.00 0.00 0.00 00:14:28.637 00:14:29.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.571 Nvme0n1 : 5.00 15222.20 59.46 0.00 0.00 0.00 0.00 0.00 00:14:29.571 =================================================================================================================== 00:14:29.571 Total : 15222.20 59.46 0.00 0.00 0.00 0.00 0.00 00:14:29.571 00:14:30.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.507 Nvme0n1 : 6.00 15275.00 59.67 0.00 0.00 0.00 0.00 0.00 00:14:30.507 =================================================================================================================== 00:14:30.507 Total : 15275.00 59.67 0.00 0.00 0.00 0.00 0.00 00:14:30.507 00:14:31.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.442 Nvme0n1 : 7.00 15307.29 59.79 0.00 0.00 0.00 0.00 0.00 00:14:31.442 =================================================================================================================== 00:14:31.442 Total : 15307.29 59.79 0.00 0.00 0.00 0.00 0.00 00:14:31.442 00:14:32.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.378 Nvme0n1 : 8.00 15329.75 59.88 0.00 0.00 0.00 0.00 0.00 00:14:32.378 
=================================================================================================================== 00:14:32.378 Total : 15329.75 59.88 0.00 0.00 0.00 0.00 0.00 00:14:32.378 00:14:33.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.315 Nvme0n1 : 9.00 15354.56 59.98 0.00 0.00 0.00 0.00 0.00 00:14:33.315 =================================================================================================================== 00:14:33.315 Total : 15354.56 59.98 0.00 0.00 0.00 0.00 0.00 00:14:33.315 00:14:34.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.247 Nvme0n1 : 10.00 15367.90 60.03 0.00 0.00 0.00 0.00 0.00 00:14:34.247 =================================================================================================================== 00:14:34.247 Total : 15367.90 60.03 0.00 0.00 0.00 0.00 0.00 00:14:34.247 00:14:34.247 00:14:34.247 Latency(us) 00:14:34.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.247 Nvme0n1 : 10.01 15367.54 60.03 0.00 0.00 8323.77 4878.79 16408.27 00:14:34.247 =================================================================================================================== 00:14:34.247 Total : 15367.54 60.03 0.00 0.00 8323.77 4878.79 16408.27 00:14:34.247 0 00:14:34.504 07:31:50 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4074526 00:14:34.504 07:31:50 -- common/autotest_common.sh@926 -- # '[' -z 4074526 ']' 00:14:34.504 07:31:50 -- common/autotest_common.sh@930 -- # kill -0 4074526 00:14:34.504 07:31:50 -- common/autotest_common.sh@931 -- # uname 00:14:34.504 07:31:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:34.504 07:31:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4074526 00:14:34.504 07:31:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:34.504 07:31:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:34.504 07:31:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4074526' 00:14:34.504 killing process with pid 4074526 00:14:34.504 07:31:50 -- common/autotest_common.sh@945 -- # kill 4074526 00:14:34.504 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.504 00:14:34.504 Latency(us) 00:14:34.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.504 =================================================================================================================== 00:14:34.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.504 07:31:50 -- common/autotest_common.sh@950 -- # wait 4074526 00:14:34.761 07:31:50 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:35.018 07:31:51 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:35.018 07:31:51 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:35.275 07:31:51 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:35.275 07:31:51 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:35.275 07:31:51 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:35.532 [2024-07-14 07:31:51.535761] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:35.532 07:31:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:35.532 07:31:51 -- common/autotest_common.sh@640 -- # local es=0 00:14:35.532 07:31:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:35.532 07:31:51 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.532 07:31:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.532 07:31:51 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.533 07:31:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.533 07:31:51 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.533 07:31:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.533 07:31:51 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.533 07:31:51 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:35.533 07:31:51 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:35.790 request: 00:14:35.790 { 00:14:35.790 "uuid": "607181df-8403-42de-8c51-e9c69861eb6d", 00:14:35.790 "method": "bdev_lvol_get_lvstores", 00:14:35.790 "req_id": 1 00:14:35.790 } 00:14:35.790 Got JSON-RPC error response 00:14:35.790 response: 00:14:35.790 { 00:14:35.790 "code": -19, 00:14:35.790 "message": "No such device" 00:14:35.790 } 00:14:35.790 07:31:51 -- common/autotest_common.sh@643 -- # es=1 00:14:35.790 07:31:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:35.790 07:31:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:35.790 07:31:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:35.790 07:31:51 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:36.048 aio_bdev 00:14:36.048 07:31:52 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 61ad79cd-51d4-41df-8565-7e799cfbd201 00:14:36.048 07:31:52 -- common/autotest_common.sh@887 -- # local bdev_name=61ad79cd-51d4-41df-8565-7e799cfbd201 00:14:36.048 07:31:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:36.048 07:31:52 -- common/autotest_common.sh@889 -- # local i 00:14:36.048 07:31:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:36.048 07:31:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:36.048 07:31:52 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:36.306 07:31:52 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61ad79cd-51d4-41df-8565-7e799cfbd201 -t 2000 00:14:36.564 [ 00:14:36.564 { 00:14:36.564 "name": "61ad79cd-51d4-41df-8565-7e799cfbd201", 00:14:36.564 "aliases": [ 00:14:36.564 "lvs/lvol" 
00:14:36.564 ], 00:14:36.564 "product_name": "Logical Volume", 00:14:36.564 "block_size": 4096, 00:14:36.564 "num_blocks": 38912, 00:14:36.564 "uuid": "61ad79cd-51d4-41df-8565-7e799cfbd201", 00:14:36.564 "assigned_rate_limits": { 00:14:36.564 "rw_ios_per_sec": 0, 00:14:36.564 "rw_mbytes_per_sec": 0, 00:14:36.564 "r_mbytes_per_sec": 0, 00:14:36.564 "w_mbytes_per_sec": 0 00:14:36.564 }, 00:14:36.564 "claimed": false, 00:14:36.564 "zoned": false, 00:14:36.564 "supported_io_types": { 00:14:36.564 "read": true, 00:14:36.564 "write": true, 00:14:36.564 "unmap": true, 00:14:36.564 "write_zeroes": true, 00:14:36.564 "flush": false, 00:14:36.564 "reset": true, 00:14:36.564 "compare": false, 00:14:36.564 "compare_and_write": false, 00:14:36.564 "abort": false, 00:14:36.564 "nvme_admin": false, 00:14:36.564 "nvme_io": false 00:14:36.564 }, 00:14:36.564 "driver_specific": { 00:14:36.564 "lvol": { 00:14:36.564 "lvol_store_uuid": "607181df-8403-42de-8c51-e9c69861eb6d", 00:14:36.564 "base_bdev": "aio_bdev", 00:14:36.564 "thin_provision": false, 00:14:36.564 "snapshot": false, 00:14:36.564 "clone": false, 00:14:36.564 "esnap_clone": false 00:14:36.564 } 00:14:36.564 } 00:14:36.564 } 00:14:36.564 ] 00:14:36.564 07:31:52 -- common/autotest_common.sh@895 -- # return 0 00:14:36.564 07:31:52 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:36.564 07:31:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:36.821 07:31:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:36.821 07:31:52 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:36.821 07:31:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:37.079 07:31:53 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:37.079 07:31:53 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61ad79cd-51d4-41df-8565-7e799cfbd201 00:14:37.079 07:31:53 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 607181df-8403-42de-8c51-e9c69861eb6d 00:14:37.336 07:31:53 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:37.592 07:31:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.592 00:14:37.592 real 0m17.493s 00:14:37.592 user 0m17.200s 00:14:37.592 sys 0m1.797s 00:14:37.592 07:31:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.592 07:31:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.592 ************************************ 00:14:37.592 END TEST lvs_grow_clean 00:14:37.592 ************************************ 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:37.850 07:31:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:37.850 07:31:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:37.850 07:31:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.850 ************************************ 00:14:37.850 START TEST lvs_grow_dirty 00:14:37.850 ************************************ 00:14:37.850 07:31:53 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:37.850 
07:31:53 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.850 07:31:53 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:38.108 07:31:54 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:38.108 07:31:54 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:38.108 07:31:54 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9635d5e6-03ec-4858-ba32-d535a5446951 00:14:38.108 07:31:54 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:38.108 07:31:54 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:38.398 07:31:54 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:38.398 07:31:54 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:38.398 07:31:54 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9635d5e6-03ec-4858-ba32-d535a5446951 lvol 150 00:14:38.655 07:31:54 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3485e916-d78d-45ba-98e7-ea8e371cf526 00:14:38.655 07:31:54 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:38.655 07:31:54 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:38.913 [2024-07-14 07:31:54.967961] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:38.913 [2024-07-14 07:31:54.968069] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:38.913 true 00:14:38.913 07:31:54 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:38.913 07:31:54 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:39.170 07:31:55 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:39.170 07:31:55 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:39.428 07:31:55 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
3485e916-d78d-45ba-98e7-ea8e371cf526 00:14:39.686 07:31:55 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:39.943 07:31:55 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:40.201 07:31:56 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4076647 00:14:40.201 07:31:56 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:40.201 07:31:56 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:40.201 07:31:56 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4076647 /var/tmp/bdevperf.sock 00:14:40.201 07:31:56 -- common/autotest_common.sh@819 -- # '[' -z 4076647 ']' 00:14:40.201 07:31:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.201 07:31:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:40.201 07:31:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:40.201 07:31:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:40.201 07:31:56 -- common/autotest_common.sh@10 -- # set +x 00:14:40.201 [2024-07-14 07:31:56.204272] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:40.201 [2024-07-14 07:31:56.204339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4076647 ] 00:14:40.201 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.201 [2024-07-14 07:31:56.264812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.459 [2024-07-14 07:31:56.379194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.024 07:31:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:41.024 07:31:57 -- common/autotest_common.sh@852 -- # return 0 00:14:41.024 07:31:57 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:41.590 Nvme0n1 00:14:41.590 07:31:57 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:41.848 [ 00:14:41.848 { 00:14:41.848 "name": "Nvme0n1", 00:14:41.848 "aliases": [ 00:14:41.848 "3485e916-d78d-45ba-98e7-ea8e371cf526" 00:14:41.848 ], 00:14:41.848 "product_name": "NVMe disk", 00:14:41.848 "block_size": 4096, 00:14:41.848 "num_blocks": 38912, 00:14:41.848 "uuid": "3485e916-d78d-45ba-98e7-ea8e371cf526", 00:14:41.848 "assigned_rate_limits": { 00:14:41.848 "rw_ios_per_sec": 0, 00:14:41.848 "rw_mbytes_per_sec": 0, 00:14:41.848 "r_mbytes_per_sec": 0, 00:14:41.848 "w_mbytes_per_sec": 0 00:14:41.848 }, 00:14:41.848 "claimed": false, 00:14:41.848 "zoned": false, 00:14:41.848 "supported_io_types": { 00:14:41.848 "read": true, 00:14:41.848 "write": true, 
00:14:41.848 "unmap": true, 00:14:41.848 "write_zeroes": true, 00:14:41.848 "flush": true, 00:14:41.848 "reset": true, 00:14:41.848 "compare": true, 00:14:41.848 "compare_and_write": true, 00:14:41.848 "abort": true, 00:14:41.848 "nvme_admin": true, 00:14:41.848 "nvme_io": true 00:14:41.848 }, 00:14:41.848 "driver_specific": { 00:14:41.848 "nvme": [ 00:14:41.848 { 00:14:41.848 "trid": { 00:14:41.848 "trtype": "TCP", 00:14:41.848 "adrfam": "IPv4", 00:14:41.848 "traddr": "10.0.0.2", 00:14:41.848 "trsvcid": "4420", 00:14:41.848 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:41.848 }, 00:14:41.848 "ctrlr_data": { 00:14:41.848 "cntlid": 1, 00:14:41.848 "vendor_id": "0x8086", 00:14:41.848 "model_number": "SPDK bdev Controller", 00:14:41.848 "serial_number": "SPDK0", 00:14:41.848 "firmware_revision": "24.01.1", 00:14:41.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:41.848 "oacs": { 00:14:41.848 "security": 0, 00:14:41.848 "format": 0, 00:14:41.848 "firmware": 0, 00:14:41.848 "ns_manage": 0 00:14:41.848 }, 00:14:41.848 "multi_ctrlr": true, 00:14:41.848 "ana_reporting": false 00:14:41.848 }, 00:14:41.848 "vs": { 00:14:41.848 "nvme_version": "1.3" 00:14:41.848 }, 00:14:41.848 "ns_data": { 00:14:41.848 "id": 1, 00:14:41.848 "can_share": true 00:14:41.848 } 00:14:41.848 } 00:14:41.848 ], 00:14:41.848 "mp_policy": "active_passive" 00:14:41.848 } 00:14:41.848 } 00:14:41.848 ] 00:14:41.848 07:31:57 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4076885 00:14:41.848 07:31:57 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:41.848 07:31:57 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:41.848 Running I/O for 10 seconds... 00:14:42.781 Latency(us) 00:14:42.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.781 Nvme0n1 : 1.00 14466.00 56.51 0.00 0.00 0.00 0.00 0.00 00:14:42.781 =================================================================================================================== 00:14:42.781 Total : 14466.00 56.51 0.00 0.00 0.00 0.00 0.00 00:14:42.781 00:14:43.715 07:31:59 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:43.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.973 Nvme0n1 : 2.00 14593.00 57.00 0.00 0.00 0.00 0.00 0.00 00:14:43.973 =================================================================================================================== 00:14:43.973 Total : 14593.00 57.00 0.00 0.00 0.00 0.00 0.00 00:14:43.973 00:14:43.973 true 00:14:43.973 07:32:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:43.973 07:32:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:44.231 07:32:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:44.232 07:32:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:44.232 07:32:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 4076885 00:14:44.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.797 Nvme0n1 : 3.00 14786.67 57.76 0.00 0.00 0.00 0.00 0.00 00:14:44.797 
=================================================================================================================== 00:14:44.797 Total : 14786.67 57.76 0.00 0.00 0.00 0.00 0.00 00:14:44.797 00:14:46.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.171 Nvme0n1 : 4.00 14866.00 58.07 0.00 0.00 0.00 0.00 0.00 00:14:46.171 =================================================================================================================== 00:14:46.171 Total : 14866.00 58.07 0.00 0.00 0.00 0.00 0.00 00:14:46.171 00:14:47.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.104 Nvme0n1 : 5.00 14964.80 58.46 0.00 0.00 0.00 0.00 0.00 00:14:47.104 =================================================================================================================== 00:14:47.104 Total : 14964.80 58.46 0.00 0.00 0.00 0.00 0.00 00:14:47.104 00:14:48.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.036 Nvme0n1 : 6.00 14997.50 58.58 0.00 0.00 0.00 0.00 0.00 00:14:48.036 =================================================================================================================== 00:14:48.036 Total : 14997.50 58.58 0.00 0.00 0.00 0.00 0.00 00:14:48.036 00:14:48.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.970 Nvme0n1 : 7.00 15022.86 58.68 0.00 0.00 0.00 0.00 0.00 00:14:48.970 =================================================================================================================== 00:14:48.970 Total : 15022.86 58.68 0.00 0.00 0.00 0.00 0.00 00:14:48.970 00:14:49.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.904 Nvme0n1 : 8.00 15049.00 58.79 0.00 0.00 0.00 0.00 0.00 00:14:49.904 =================================================================================================================== 00:14:49.904 Total : 15049.00 58.79 0.00 0.00 0.00 0.00 0.00 00:14:49.904 00:14:50.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.839 Nvme0n1 : 9.00 15069.33 58.86 0.00 0.00 0.00 0.00 0.00 00:14:50.839 =================================================================================================================== 00:14:50.839 Total : 15069.33 58.86 0.00 0.00 0.00 0.00 0.00 00:14:50.839 00:14:51.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.774 Nvme0n1 : 10.00 15085.60 58.93 0.00 0.00 0.00 0.00 0.00 00:14:51.774 =================================================================================================================== 00:14:51.774 Total : 15085.60 58.93 0.00 0.00 0.00 0.00 0.00 00:14:51.774 00:14:51.774 00:14:51.774 Latency(us) 00:14:51.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.774 Nvme0n1 : 10.01 15089.89 58.94 0.00 0.00 8477.24 4369.07 14272.28 00:14:51.774 =================================================================================================================== 00:14:51.774 Total : 15089.89 58.94 0.00 0.00 8477.24 4369.07 14272.28 00:14:51.774 0 00:14:51.774 07:32:07 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4076647 00:14:51.774 07:32:07 -- common/autotest_common.sh@926 -- # '[' -z 4076647 ']' 00:14:51.774 07:32:07 -- common/autotest_common.sh@930 -- # kill -0 4076647 00:14:51.774 07:32:07 -- common/autotest_common.sh@931 -- # uname 00:14:51.774 07:32:07 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:51.774 07:32:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4076647 00:14:52.032 07:32:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:52.032 07:32:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:52.032 07:32:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4076647' 00:14:52.032 killing process with pid 4076647 00:14:52.032 07:32:07 -- common/autotest_common.sh@945 -- # kill 4076647 00:14:52.032 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.032 00:14:52.032 Latency(us) 00:14:52.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.032 =================================================================================================================== 00:14:52.032 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:52.032 07:32:07 -- common/autotest_common.sh@950 -- # wait 4076647 00:14:52.290 07:32:08 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:52.548 07:32:08 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:52.548 07:32:08 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:52.812 07:32:08 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:52.812 07:32:08 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:52.812 07:32:08 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 4073954 00:14:52.812 07:32:08 -- target/nvmf_lvs_grow.sh@74 -- # wait 4073954 00:14:52.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 4073954 Killed "${NVMF_APP[@]}" "$@" 00:14:52.812 07:32:08 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:52.812 07:32:08 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:52.812 07:32:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:52.812 07:32:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:52.812 07:32:08 -- common/autotest_common.sh@10 -- # set +x 00:14:52.812 07:32:08 -- nvmf/common.sh@469 -- # nvmfpid=4078160 00:14:52.812 07:32:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:52.812 07:32:08 -- nvmf/common.sh@470 -- # waitforlisten 4078160 00:14:52.812 07:32:08 -- common/autotest_common.sh@819 -- # '[' -z 4078160 ']' 00:14:52.812 07:32:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.812 07:32:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:52.812 07:32:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.812 07:32:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:52.812 07:32:08 -- common/autotest_common.sh@10 -- # set +x 00:14:52.812 [2024-07-14 07:32:08.842113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:52.812 [2024-07-14 07:32:08.842195] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.812 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.812 [2024-07-14 07:32:08.906358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.118 [2024-07-14 07:32:09.016603] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.118 [2024-07-14 07:32:09.016752] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.118 [2024-07-14 07:32:09.016770] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.118 [2024-07-14 07:32:09.016781] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.118 [2024-07-14 07:32:09.016815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.681 07:32:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:53.681 07:32:09 -- common/autotest_common.sh@852 -- # return 0 00:14:53.681 07:32:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:53.681 07:32:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:53.681 07:32:09 -- common/autotest_common.sh@10 -- # set +x 00:14:53.681 07:32:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.681 07:32:09 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:53.939 [2024-07-14 07:32:10.030245] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:53.939 [2024-07-14 07:32:10.030416] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:53.939 [2024-07-14 07:32:10.030465] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:53.939 07:32:10 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:53.939 07:32:10 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 3485e916-d78d-45ba-98e7-ea8e371cf526 00:14:53.939 07:32:10 -- common/autotest_common.sh@887 -- # local bdev_name=3485e916-d78d-45ba-98e7-ea8e371cf526 00:14:53.939 07:32:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:53.939 07:32:10 -- common/autotest_common.sh@889 -- # local i 00:14:53.939 07:32:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:53.939 07:32:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:53.939 07:32:10 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:54.197 07:32:10 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3485e916-d78d-45ba-98e7-ea8e371cf526 -t 2000 00:14:54.455 [ 00:14:54.455 { 00:14:54.455 "name": "3485e916-d78d-45ba-98e7-ea8e371cf526", 00:14:54.455 "aliases": [ 00:14:54.455 "lvs/lvol" 00:14:54.455 ], 00:14:54.455 "product_name": "Logical Volume", 00:14:54.455 "block_size": 4096, 00:14:54.455 "num_blocks": 38912, 00:14:54.455 "uuid": "3485e916-d78d-45ba-98e7-ea8e371cf526", 00:14:54.455 "assigned_rate_limits": { 00:14:54.455 "rw_ios_per_sec": 0, 00:14:54.455 "rw_mbytes_per_sec": 0, 00:14:54.455 "r_mbytes_per_sec": 0, 00:14:54.455 
"w_mbytes_per_sec": 0 00:14:54.455 }, 00:14:54.455 "claimed": false, 00:14:54.455 "zoned": false, 00:14:54.455 "supported_io_types": { 00:14:54.455 "read": true, 00:14:54.455 "write": true, 00:14:54.455 "unmap": true, 00:14:54.455 "write_zeroes": true, 00:14:54.455 "flush": false, 00:14:54.455 "reset": true, 00:14:54.455 "compare": false, 00:14:54.455 "compare_and_write": false, 00:14:54.455 "abort": false, 00:14:54.455 "nvme_admin": false, 00:14:54.455 "nvme_io": false 00:14:54.455 }, 00:14:54.455 "driver_specific": { 00:14:54.455 "lvol": { 00:14:54.455 "lvol_store_uuid": "9635d5e6-03ec-4858-ba32-d535a5446951", 00:14:54.455 "base_bdev": "aio_bdev", 00:14:54.455 "thin_provision": false, 00:14:54.455 "snapshot": false, 00:14:54.455 "clone": false, 00:14:54.455 "esnap_clone": false 00:14:54.455 } 00:14:54.455 } 00:14:54.455 } 00:14:54.455 ] 00:14:54.455 07:32:10 -- common/autotest_common.sh@895 -- # return 0 00:14:54.455 07:32:10 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:54.455 07:32:10 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:54.712 07:32:10 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:54.712 07:32:10 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:54.712 07:32:10 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:54.969 07:32:11 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:54.969 07:32:11 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:55.226 [2024-07-14 07:32:11.223062] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:55.226 07:32:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:55.226 07:32:11 -- common/autotest_common.sh@640 -- # local es=0 00:14:55.226 07:32:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:55.226 07:32:11 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.226 07:32:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:55.226 07:32:11 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.226 07:32:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:55.226 07:32:11 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.226 07:32:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:55.226 07:32:11 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.226 07:32:11 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:55.226 07:32:11 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:55.485 request: 00:14:55.485 { 00:14:55.485 
"uuid": "9635d5e6-03ec-4858-ba32-d535a5446951", 00:14:55.485 "method": "bdev_lvol_get_lvstores", 00:14:55.485 "req_id": 1 00:14:55.485 } 00:14:55.485 Got JSON-RPC error response 00:14:55.485 response: 00:14:55.485 { 00:14:55.485 "code": -19, 00:14:55.485 "message": "No such device" 00:14:55.485 } 00:14:55.485 07:32:11 -- common/autotest_common.sh@643 -- # es=1 00:14:55.485 07:32:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:55.485 07:32:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:55.485 07:32:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:55.485 07:32:11 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:55.743 aio_bdev 00:14:55.743 07:32:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3485e916-d78d-45ba-98e7-ea8e371cf526 00:14:55.743 07:32:11 -- common/autotest_common.sh@887 -- # local bdev_name=3485e916-d78d-45ba-98e7-ea8e371cf526 00:14:55.743 07:32:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:55.743 07:32:11 -- common/autotest_common.sh@889 -- # local i 00:14:55.743 07:32:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:55.743 07:32:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:55.743 07:32:11 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:56.000 07:32:12 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3485e916-d78d-45ba-98e7-ea8e371cf526 -t 2000 00:14:56.259 [ 00:14:56.259 { 00:14:56.259 "name": "3485e916-d78d-45ba-98e7-ea8e371cf526", 00:14:56.259 "aliases": [ 00:14:56.259 "lvs/lvol" 00:14:56.259 ], 00:14:56.259 "product_name": "Logical Volume", 00:14:56.259 "block_size": 4096, 00:14:56.259 "num_blocks": 38912, 00:14:56.259 "uuid": "3485e916-d78d-45ba-98e7-ea8e371cf526", 00:14:56.259 "assigned_rate_limits": { 00:14:56.259 "rw_ios_per_sec": 0, 00:14:56.259 "rw_mbytes_per_sec": 0, 00:14:56.259 "r_mbytes_per_sec": 0, 00:14:56.259 "w_mbytes_per_sec": 0 00:14:56.259 }, 00:14:56.259 "claimed": false, 00:14:56.259 "zoned": false, 00:14:56.259 "supported_io_types": { 00:14:56.259 "read": true, 00:14:56.259 "write": true, 00:14:56.259 "unmap": true, 00:14:56.259 "write_zeroes": true, 00:14:56.259 "flush": false, 00:14:56.259 "reset": true, 00:14:56.259 "compare": false, 00:14:56.259 "compare_and_write": false, 00:14:56.259 "abort": false, 00:14:56.259 "nvme_admin": false, 00:14:56.259 "nvme_io": false 00:14:56.259 }, 00:14:56.259 "driver_specific": { 00:14:56.259 "lvol": { 00:14:56.259 "lvol_store_uuid": "9635d5e6-03ec-4858-ba32-d535a5446951", 00:14:56.259 "base_bdev": "aio_bdev", 00:14:56.259 "thin_provision": false, 00:14:56.259 "snapshot": false, 00:14:56.259 "clone": false, 00:14:56.259 "esnap_clone": false 00:14:56.259 } 00:14:56.259 } 00:14:56.259 } 00:14:56.259 ] 00:14:56.259 07:32:12 -- common/autotest_common.sh@895 -- # return 0 00:14:56.259 07:32:12 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:56.259 07:32:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:56.517 07:32:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:56.517 07:32:12 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:56.517 07:32:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:56.775 07:32:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:56.775 07:32:12 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3485e916-d78d-45ba-98e7-ea8e371cf526 00:14:57.032 07:32:12 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9635d5e6-03ec-4858-ba32-d535a5446951 00:14:57.032 07:32:13 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:57.289 07:32:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.545 00:14:57.545 real 0m19.686s 00:14:57.545 user 0m49.403s 00:14:57.545 sys 0m4.724s 00:14:57.545 07:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.545 07:32:13 -- common/autotest_common.sh@10 -- # set +x 00:14:57.545 ************************************ 00:14:57.545 END TEST lvs_grow_dirty 00:14:57.545 ************************************ 00:14:57.545 07:32:13 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:57.545 07:32:13 -- common/autotest_common.sh@796 -- # type=--id 00:14:57.545 07:32:13 -- common/autotest_common.sh@797 -- # id=0 00:14:57.545 07:32:13 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:57.545 07:32:13 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:57.545 07:32:13 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:57.545 07:32:13 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:57.546 07:32:13 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:57.546 07:32:13 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:57.546 nvmf_trace.0 00:14:57.546 07:32:13 -- common/autotest_common.sh@811 -- # return 0 00:14:57.546 07:32:13 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:57.546 07:32:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:57.546 07:32:13 -- nvmf/common.sh@116 -- # sync 00:14:57.546 07:32:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:57.546 07:32:13 -- nvmf/common.sh@119 -- # set +e 00:14:57.546 07:32:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:57.546 07:32:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:57.546 rmmod nvme_tcp 00:14:57.546 rmmod nvme_fabrics 00:14:57.546 rmmod nvme_keyring 00:14:57.546 07:32:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:57.546 07:32:13 -- nvmf/common.sh@123 -- # set -e 00:14:57.546 07:32:13 -- nvmf/common.sh@124 -- # return 0 00:14:57.546 07:32:13 -- nvmf/common.sh@477 -- # '[' -n 4078160 ']' 00:14:57.546 07:32:13 -- nvmf/common.sh@478 -- # killprocess 4078160 00:14:57.546 07:32:13 -- common/autotest_common.sh@926 -- # '[' -z 4078160 ']' 00:14:57.546 07:32:13 -- common/autotest_common.sh@930 -- # kill -0 4078160 00:14:57.546 07:32:13 -- common/autotest_common.sh@931 -- # uname 00:14:57.546 07:32:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:57.546 07:32:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4078160 00:14:57.546 07:32:13 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:14:57.546 07:32:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:57.546 07:32:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4078160' 00:14:57.546 killing process with pid 4078160 00:14:57.546 07:32:13 -- common/autotest_common.sh@945 -- # kill 4078160 00:14:57.546 07:32:13 -- common/autotest_common.sh@950 -- # wait 4078160 00:14:57.802 07:32:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:57.802 07:32:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:57.802 07:32:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:57.802 07:32:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.802 07:32:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:57.802 07:32:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.802 07:32:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.802 07:32:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.327 07:32:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:00.327 00:15:00.327 real 0m43.055s 00:15:00.327 user 1m12.843s 00:15:00.327 sys 0m8.381s 00:15:00.327 07:32:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.327 07:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 ************************************ 00:15:00.327 END TEST nvmf_lvs_grow 00:15:00.327 ************************************ 00:15:00.327 07:32:15 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:00.327 07:32:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.327 07:32:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.327 07:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:00.327 ************************************ 00:15:00.327 START TEST nvmf_bdev_io_wait 00:15:00.327 ************************************ 00:15:00.327 07:32:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:00.327 * Looking for test storage... 
00:15:00.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.327 07:32:15 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.327 07:32:15 -- nvmf/common.sh@7 -- # uname -s 00:15:00.327 07:32:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.327 07:32:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.328 07:32:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.328 07:32:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.328 07:32:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.328 07:32:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.328 07:32:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.328 07:32:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.328 07:32:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.328 07:32:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.328 07:32:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.328 07:32:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.328 07:32:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.328 07:32:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.328 07:32:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.328 07:32:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.328 07:32:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.328 07:32:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.328 07:32:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.328 07:32:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.328 07:32:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.328 07:32:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.328 07:32:16 -- paths/export.sh@5 -- # export PATH 00:15:00.328 07:32:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.328 07:32:16 -- nvmf/common.sh@46 -- # : 0 00:15:00.328 07:32:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:00.328 07:32:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:00.328 07:32:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:00.328 07:32:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.328 07:32:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.328 07:32:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:00.328 07:32:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:00.328 07:32:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:00.328 07:32:16 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.328 07:32:16 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.328 07:32:16 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:00.328 07:32:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:00.328 07:32:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.328 07:32:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:00.328 07:32:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:00.328 07:32:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:00.328 07:32:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.328 07:32:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.328 07:32:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.328 07:32:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:00.328 07:32:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:00.328 07:32:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:00.328 07:32:16 -- common/autotest_common.sh@10 -- # set +x 00:15:02.230 07:32:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:02.230 07:32:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:02.230 07:32:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:02.230 07:32:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:02.230 07:32:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:02.230 07:32:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:02.230 07:32:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:02.230 07:32:17 -- nvmf/common.sh@294 -- # net_devs=() 00:15:02.230 07:32:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:02.230 07:32:17 -- 
nvmf/common.sh@295 -- # e810=() 00:15:02.230 07:32:17 -- nvmf/common.sh@295 -- # local -ga e810 00:15:02.230 07:32:17 -- nvmf/common.sh@296 -- # x722=() 00:15:02.230 07:32:17 -- nvmf/common.sh@296 -- # local -ga x722 00:15:02.230 07:32:17 -- nvmf/common.sh@297 -- # mlx=() 00:15:02.230 07:32:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:02.230 07:32:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.230 07:32:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:02.230 07:32:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:02.230 07:32:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:02.230 07:32:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.230 07:32:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:02.230 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:02.230 07:32:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.230 07:32:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:02.230 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:02.230 07:32:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:02.230 07:32:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.230 07:32:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.230 07:32:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.230 07:32:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.230 07:32:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:15:02.230 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:02.230 07:32:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.230 07:32:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.230 07:32:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.230 07:32:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.230 07:32:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.230 07:32:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:02.230 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:02.230 07:32:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.230 07:32:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:02.230 07:32:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:02.230 07:32:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:02.230 07:32:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:02.230 07:32:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.230 07:32:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.230 07:32:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.230 07:32:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:02.230 07:32:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.230 07:32:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.230 07:32:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:02.230 07:32:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.230 07:32:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.230 07:32:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:02.230 07:32:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:02.230 07:32:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.230 07:32:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.230 07:32:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.230 07:32:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.230 07:32:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:02.230 07:32:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.230 07:32:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.230 07:32:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.230 07:32:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:02.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:15:02.230 00:15:02.230 --- 10.0.0.2 ping statistics --- 00:15:02.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.230 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:02.230 07:32:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:15:02.230 00:15:02.230 --- 10.0.0.1 ping statistics --- 00:15:02.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.230 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:02.230 07:32:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.230 07:32:18 -- nvmf/common.sh@410 -- # return 0 00:15:02.230 07:32:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.230 07:32:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.230 07:32:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:02.230 07:32:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:02.230 07:32:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.230 07:32:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:02.230 07:32:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:02.230 07:32:18 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:02.230 07:32:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.230 07:32:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:02.230 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.230 07:32:18 -- nvmf/common.sh@469 -- # nvmfpid=4080741 00:15:02.230 07:32:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:02.230 07:32:18 -- nvmf/common.sh@470 -- # waitforlisten 4080741 00:15:02.230 07:32:18 -- common/autotest_common.sh@819 -- # '[' -z 4080741 ']' 00:15:02.230 07:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.230 07:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:02.230 07:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.230 07:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:02.230 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.230 [2024-07-14 07:32:18.169664] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:02.230 [2024-07-14 07:32:18.169736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.230 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.230 [2024-07-14 07:32:18.237193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.230 [2024-07-14 07:32:18.344072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.230 [2024-07-14 07:32:18.344223] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.230 [2024-07-14 07:32:18.344240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.230 [2024-07-14 07:32:18.344252] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:02.230 [2024-07-14 07:32:18.344304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.230 [2024-07-14 07:32:18.344554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.230 [2024-07-14 07:32:18.344582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.230 [2024-07-14 07:32:18.344585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.230 07:32:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:02.230 07:32:18 -- common/autotest_common.sh@852 -- # return 0 00:15:02.230 07:32:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:02.230 07:32:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:02.230 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.230 07:32:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.230 07:32:18 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:02.230 07:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.230 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.488 07:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:02.488 07:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.488 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.488 07:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.488 07:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.488 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.488 [2024-07-14 07:32:18.480624] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.488 07:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:02.488 07:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.488 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.488 Malloc0 00:15:02.488 07:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:02.488 07:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.488 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.488 07:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:02.488 07:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.488 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.488 07:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.488 07:32:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.488 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.488 [2024-07-14 07:32:18.541588] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.488 07:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4080866 00:15:02.488 
07:32:18 -- target/bdev_io_wait.sh@30 -- # READ_PID=4080868 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:02.488 07:32:18 -- nvmf/common.sh@520 -- # config=() 00:15:02.488 07:32:18 -- nvmf/common.sh@520 -- # local subsystem config 00:15:02.488 07:32:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:02.488 07:32:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:02.488 { 00:15:02.488 "params": { 00:15:02.488 "name": "Nvme$subsystem", 00:15:02.488 "trtype": "$TEST_TRANSPORT", 00:15:02.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.488 "adrfam": "ipv4", 00:15:02.488 "trsvcid": "$NVMF_PORT", 00:15:02.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.488 "hdgst": ${hdgst:-false}, 00:15:02.488 "ddgst": ${ddgst:-false} 00:15:02.488 }, 00:15:02.488 "method": "bdev_nvme_attach_controller" 00:15:02.488 } 00:15:02.488 EOF 00:15:02.488 )") 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4080870 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:02.488 07:32:18 -- nvmf/common.sh@520 -- # config=() 00:15:02.488 07:32:18 -- nvmf/common.sh@520 -- # local subsystem config 00:15:02.488 07:32:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:02.488 07:32:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:02.488 { 00:15:02.488 "params": { 00:15:02.488 "name": "Nvme$subsystem", 00:15:02.488 "trtype": "$TEST_TRANSPORT", 00:15:02.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.488 "adrfam": "ipv4", 00:15:02.488 "trsvcid": "$NVMF_PORT", 00:15:02.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.488 "hdgst": ${hdgst:-false}, 00:15:02.488 "ddgst": ${ddgst:-false} 00:15:02.488 }, 00:15:02.488 "method": "bdev_nvme_attach_controller" 00:15:02.488 } 00:15:02.488 EOF 00:15:02.488 )") 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4080873 00:15:02.488 07:32:18 -- nvmf/common.sh@542 -- # cat 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@35 -- # sync 00:15:02.488 07:32:18 -- nvmf/common.sh@520 -- # config=() 00:15:02.488 07:32:18 -- nvmf/common.sh@520 -- # local subsystem config 00:15:02.488 07:32:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:02.488 07:32:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:02.488 { 00:15:02.488 "params": { 00:15:02.488 "name": "Nvme$subsystem", 00:15:02.488 "trtype": "$TEST_TRANSPORT", 00:15:02.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.488 "adrfam": "ipv4", 00:15:02.488 "trsvcid": "$NVMF_PORT", 00:15:02.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.488 "hdgst": ${hdgst:-false}, 00:15:02.488 "ddgst": ${ddgst:-false} 00:15:02.488 }, 
00:15:02.488 "method": "bdev_nvme_attach_controller" 00:15:02.488 } 00:15:02.488 EOF 00:15:02.488 )") 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:02.488 07:32:18 -- nvmf/common.sh@520 -- # config=() 00:15:02.488 07:32:18 -- nvmf/common.sh@542 -- # cat 00:15:02.488 07:32:18 -- nvmf/common.sh@520 -- # local subsystem config 00:15:02.488 07:32:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:02.488 07:32:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:02.488 { 00:15:02.488 "params": { 00:15:02.488 "name": "Nvme$subsystem", 00:15:02.488 "trtype": "$TEST_TRANSPORT", 00:15:02.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.488 "adrfam": "ipv4", 00:15:02.488 "trsvcid": "$NVMF_PORT", 00:15:02.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.488 "hdgst": ${hdgst:-false}, 00:15:02.488 "ddgst": ${ddgst:-false} 00:15:02.488 }, 00:15:02.488 "method": "bdev_nvme_attach_controller" 00:15:02.488 } 00:15:02.488 EOF 00:15:02.488 )") 00:15:02.488 07:32:18 -- nvmf/common.sh@542 -- # cat 00:15:02.488 07:32:18 -- target/bdev_io_wait.sh@37 -- # wait 4080866 00:15:02.488 07:32:18 -- nvmf/common.sh@542 -- # cat 00:15:02.488 07:32:18 -- nvmf/common.sh@544 -- # jq . 00:15:02.488 07:32:18 -- nvmf/common.sh@544 -- # jq . 00:15:02.488 07:32:18 -- nvmf/common.sh@544 -- # jq . 00:15:02.488 07:32:18 -- nvmf/common.sh@545 -- # IFS=, 00:15:02.488 07:32:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:02.488 "params": { 00:15:02.488 "name": "Nvme1", 00:15:02.488 "trtype": "tcp", 00:15:02.488 "traddr": "10.0.0.2", 00:15:02.488 "adrfam": "ipv4", 00:15:02.488 "trsvcid": "4420", 00:15:02.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.488 "hdgst": false, 00:15:02.488 "ddgst": false 00:15:02.488 }, 00:15:02.489 "method": "bdev_nvme_attach_controller" 00:15:02.489 }' 00:15:02.489 07:32:18 -- nvmf/common.sh@544 -- # jq . 
00:15:02.489 07:32:18 -- nvmf/common.sh@545 -- # IFS=, 00:15:02.489 07:32:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:02.489 "params": { 00:15:02.489 "name": "Nvme1", 00:15:02.489 "trtype": "tcp", 00:15:02.489 "traddr": "10.0.0.2", 00:15:02.489 "adrfam": "ipv4", 00:15:02.489 "trsvcid": "4420", 00:15:02.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.489 "hdgst": false, 00:15:02.489 "ddgst": false 00:15:02.489 }, 00:15:02.489 "method": "bdev_nvme_attach_controller" 00:15:02.489 }' 00:15:02.489 07:32:18 -- nvmf/common.sh@545 -- # IFS=, 00:15:02.489 07:32:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:02.489 "params": { 00:15:02.489 "name": "Nvme1", 00:15:02.489 "trtype": "tcp", 00:15:02.489 "traddr": "10.0.0.2", 00:15:02.489 "adrfam": "ipv4", 00:15:02.489 "trsvcid": "4420", 00:15:02.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.489 "hdgst": false, 00:15:02.489 "ddgst": false 00:15:02.489 }, 00:15:02.489 "method": "bdev_nvme_attach_controller" 00:15:02.489 }' 00:15:02.489 07:32:18 -- nvmf/common.sh@545 -- # IFS=, 00:15:02.489 07:32:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:02.489 "params": { 00:15:02.489 "name": "Nvme1", 00:15:02.489 "trtype": "tcp", 00:15:02.489 "traddr": "10.0.0.2", 00:15:02.489 "adrfam": "ipv4", 00:15:02.489 "trsvcid": "4420", 00:15:02.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.489 "hdgst": false, 00:15:02.489 "ddgst": false 00:15:02.489 }, 00:15:02.489 "method": "bdev_nvme_attach_controller" 00:15:02.489 }' 00:15:02.489 [2024-07-14 07:32:18.583083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:02.489 [2024-07-14 07:32:18.583084] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:02.489 [2024-07-14 07:32:18.583083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:02.489 [2024-07-14 07:32:18.583083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
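Note: the four jq/printf pairs above each render the same gen_nvmf_target_json template into the bdev_nvme_attach_controller parameters just printed. A minimal hand-run equivalent of the write instance is sketched below; the outer "subsystems"/"bdev" wrapper is an assumption (the trace only shows the inner config entry), and /tmp/bdevperf_nvme.json is a hypothetical scratch path.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the -w write instance traced above (core mask 0x10, shm id 1):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256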
00:15:02.489 [2024-07-14 07:32:18.583174] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:15:02.489 [2024-07-14 07:32:18.583174] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:15:02.489 [2024-07-14 07:32:18.583174] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:15:02.489 [2024-07-14 07:32:18.583175] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:15:02.489 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.746 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.746 [2024-07-14 07:32:18.748047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.746 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.746 [2024-07-14 07:32:18.841961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:02.746 [2024-07-14 07:32:18.845271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.746 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.005 [2024-07-14 07:32:18.938758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:03.005 [2024-07-14 07:32:18.942354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.005 [2024-07-14 07:32:19.037239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:03.005 [2024-07-14 07:32:19.041905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.005 [2024-07-14 07:32:19.137075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:03.263 Running I/O for 1 seconds... 00:15:03.263 Running I/O for 1 seconds... 00:15:03.263 Running I/O for 1 seconds... 00:15:03.522 Running I/O for 1 seconds...
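Note: in the per-job result tables that follow, the MiB/s column is derived from the IOPS column and the 4096-byte IO size (MiB/s = IOPS x 4096 / 2^20). A quick sanity check against the flush job below:
echo 'scale=2; 187314.37 * 4096 / 1048576' | bc   # -> 731.69, i.e. the table's 731.70 after rounding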
00:15:04.087 00:15:04.087 Latency(us) 00:15:04.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.087 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:04.087 Nvme1n1 : 1.00 187314.37 731.70 0.00 0.00 680.74 263.96 928.43 00:15:04.087 =================================================================================================================== 00:15:04.087 Total : 187314.37 731.70 0.00 0.00 680.74 263.96 928.43 00:15:04.087 00:15:04.087 Latency(us) 00:15:04.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.087 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:04.087 Nvme1n1 : 1.01 8474.84 33.10 0.00 0.00 14983.13 7815.77 27573.67 00:15:04.087 =================================================================================================================== 00:15:04.087 Total : 8474.84 33.10 0.00 0.00 14983.13 7815.77 27573.67 00:15:04.346 00:15:04.346 Latency(us) 00:15:04.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.346 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:04.346 Nvme1n1 : 1.01 9890.05 38.63 0.00 0.00 12863.13 8592.50 25826.04 00:15:04.346 =================================================================================================================== 00:15:04.346 Total : 9890.05 38.63 0.00 0.00 12863.13 8592.50 25826.04 00:15:04.346 00:15:04.346 Latency(us) 00:15:04.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.346 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:04.346 Nvme1n1 : 1.01 8033.77 31.38 0.00 0.00 15870.09 6310.87 37865.24 00:15:04.346 =================================================================================================================== 00:15:04.346 Total : 8033.77 31.38 0.00 0.00 15870.09 6310.87 37865.24 00:15:04.604 07:32:20 -- target/bdev_io_wait.sh@38 -- # wait 4080868 00:15:04.861 07:32:20 -- target/bdev_io_wait.sh@39 -- # wait 4080870 00:15:04.861 07:32:20 -- target/bdev_io_wait.sh@40 -- # wait 4080873 00:15:04.861 07:32:20 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.861 07:32:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:04.861 07:32:20 -- common/autotest_common.sh@10 -- # set +x 00:15:04.861 07:32:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:04.861 07:32:20 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:04.861 07:32:20 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:04.861 07:32:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:04.861 07:32:20 -- nvmf/common.sh@116 -- # sync 00:15:04.861 07:32:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:04.861 07:32:20 -- nvmf/common.sh@119 -- # set +e 00:15:04.861 07:32:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:04.861 07:32:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:04.861 rmmod nvme_tcp 00:15:04.861 rmmod nvme_fabrics 00:15:04.861 rmmod nvme_keyring 00:15:04.861 07:32:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:04.861 07:32:20 -- nvmf/common.sh@123 -- # set -e 00:15:04.861 07:32:20 -- nvmf/common.sh@124 -- # return 0 00:15:04.861 07:32:20 -- nvmf/common.sh@477 -- # '[' -n 4080741 ']' 00:15:04.861 07:32:20 -- nvmf/common.sh@478 -- # killprocess 4080741 00:15:04.861 07:32:20 -- common/autotest_common.sh@926 -- # '[' -z 4080741 ']' 00:15:04.861 07:32:20 -- 
common/autotest_common.sh@930 -- # kill -0 4080741 00:15:04.861 07:32:20 -- common/autotest_common.sh@931 -- # uname 00:15:04.861 07:32:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:04.861 07:32:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4080741 00:15:04.861 07:32:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:04.861 07:32:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:04.861 07:32:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4080741' 00:15:04.861 killing process with pid 4080741 00:15:04.861 07:32:20 -- common/autotest_common.sh@945 -- # kill 4080741 00:15:04.861 07:32:20 -- common/autotest_common.sh@950 -- # wait 4080741 00:15:05.118 07:32:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.118 07:32:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.118 07:32:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.118 07:32:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.118 07:32:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.118 07:32:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.118 07:32:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.118 07:32:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.675 07:32:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:07.675 00:15:07.675 real 0m7.287s 00:15:07.675 user 0m16.114s 00:15:07.675 sys 0m3.527s 00:15:07.675 07:32:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.675 07:32:23 -- common/autotest_common.sh@10 -- # set +x 00:15:07.675 ************************************ 00:15:07.675 END TEST nvmf_bdev_io_wait 00:15:07.675 ************************************ 00:15:07.675 07:32:23 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:07.675 07:32:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:07.675 07:32:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.675 07:32:23 -- common/autotest_common.sh@10 -- # set +x 00:15:07.675 ************************************ 00:15:07.675 START TEST nvmf_queue_depth 00:15:07.675 ************************************ 00:15:07.675 07:32:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:07.675 * Looking for test storage... 
00:15:07.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.675 07:32:23 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.675 07:32:23 -- nvmf/common.sh@7 -- # uname -s 00:15:07.675 07:32:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.675 07:32:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.675 07:32:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.675 07:32:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.675 07:32:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.675 07:32:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.675 07:32:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.675 07:32:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.675 07:32:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.675 07:32:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.675 07:32:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.675 07:32:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.675 07:32:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.675 07:32:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.675 07:32:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.675 07:32:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.675 07:32:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.675 07:32:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.675 07:32:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.675 07:32:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.675 07:32:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.675 07:32:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.675 07:32:23 -- paths/export.sh@5 -- # export PATH 00:15:07.676 07:32:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.676 07:32:23 -- nvmf/common.sh@46 -- # : 0 00:15:07.676 07:32:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:07.676 07:32:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:07.676 07:32:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:07.676 07:32:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.676 07:32:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.676 07:32:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:07.676 07:32:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:07.676 07:32:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:07.676 07:32:23 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:07.676 07:32:23 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:07.676 07:32:23 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.676 07:32:23 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:07.676 07:32:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:07.676 07:32:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.676 07:32:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:07.676 07:32:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:07.676 07:32:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:07.676 07:32:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.676 07:32:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.676 07:32:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.676 07:32:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:07.676 07:32:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:07.676 07:32:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:07.676 07:32:23 -- common/autotest_common.sh@10 -- # set +x 00:15:09.576 07:32:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:09.576 07:32:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:09.576 07:32:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:09.576 07:32:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:09.576 07:32:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:09.576 07:32:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:09.576 07:32:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:09.576 07:32:25 -- nvmf/common.sh@294 -- # net_devs=() 
00:15:09.576 07:32:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:09.576 07:32:25 -- nvmf/common.sh@295 -- # e810=() 00:15:09.576 07:32:25 -- nvmf/common.sh@295 -- # local -ga e810 00:15:09.576 07:32:25 -- nvmf/common.sh@296 -- # x722=() 00:15:09.576 07:32:25 -- nvmf/common.sh@296 -- # local -ga x722 00:15:09.576 07:32:25 -- nvmf/common.sh@297 -- # mlx=() 00:15:09.576 07:32:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:09.576 07:32:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.576 07:32:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:09.576 07:32:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:09.576 07:32:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:09.576 07:32:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:09.576 07:32:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:09.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:09.576 07:32:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:09.576 07:32:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:09.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:09.576 07:32:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:09.576 07:32:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:09.576 07:32:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.576 07:32:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:09.576 07:32:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
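Note: the device loop above matches PCI functions by vendor/device id (0x8086:0x159b = Intel E810) and derives each function's kernel net device from sysfs. An equivalent manual lookup, shown only as an illustrative sketch:
lspci -d 8086:159b                          # list the E810 functions (0000:0a:00.0 / .1 on this node)
ls /sys/bus/pci/devices/0000:0a:00.0/net    # prints cvl_0_0, the name the script records next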
00:15:09.576 07:32:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:09.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:09.576 07:32:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.576 07:32:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:09.576 07:32:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.576 07:32:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:09.576 07:32:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.576 07:32:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:09.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:09.576 07:32:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.576 07:32:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:09.576 07:32:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:09.576 07:32:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:09.576 07:32:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:09.576 07:32:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.576 07:32:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.576 07:32:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.576 07:32:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:09.576 07:32:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.576 07:32:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.576 07:32:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:09.576 07:32:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.576 07:32:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.576 07:32:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:09.576 07:32:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:09.576 07:32:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.577 07:32:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.577 07:32:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.577 07:32:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.577 07:32:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:09.577 07:32:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.577 07:32:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.577 07:32:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.577 07:32:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:09.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:15:09.577 00:15:09.577 --- 10.0.0.2 ping statistics --- 00:15:09.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.577 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:15:09.577 07:32:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:15:09.577 00:15:09.577 --- 10.0.0.1 ping statistics --- 00:15:09.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.577 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:15:09.577 07:32:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.577 07:32:25 -- nvmf/common.sh@410 -- # return 0 00:15:09.577 07:32:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:09.577 07:32:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.577 07:32:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:09.577 07:32:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:09.577 07:32:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.577 07:32:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:09.577 07:32:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:09.577 07:32:25 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:09.577 07:32:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:09.577 07:32:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:09.577 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:15:09.577 07:32:25 -- nvmf/common.sh@469 -- # nvmfpid=4083110 00:15:09.577 07:32:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:09.577 07:32:25 -- nvmf/common.sh@470 -- # waitforlisten 4083110 00:15:09.577 07:32:25 -- common/autotest_common.sh@819 -- # '[' -z 4083110 ']' 00:15:09.577 07:32:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.577 07:32:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:09.577 07:32:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.577 07:32:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:09.577 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:15:09.577 [2024-07-14 07:32:25.571970] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:09.577 [2024-07-14 07:32:25.572047] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.577 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.577 [2024-07-14 07:32:25.635762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.577 [2024-07-14 07:32:25.739533] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.577 [2024-07-14 07:32:25.739692] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.577 [2024-07-14 07:32:25.739710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.577 [2024-07-14 07:32:25.739722] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
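Note: the -m core masks used throughout this log are bitmaps in which bit N selects CPU core N, so the target's "-m 0x2" comes up on core 1 (see the reactor notice just below), and the earlier bdevperf masks 0x10/0x20/0x40/0x80 came up on cores 4-7. A quick illustration:
printf '0x%x\n' $((1 << 1))   # 0x2  -> nvmf_tgt mask, reactor on core 1
printf '0x%x\n' $((1 << 4))   # 0x10 -> bdevperf write instance, reactor on core 4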
00:15:09.577 [2024-07-14 07:32:25.739749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.508 07:32:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:10.508 07:32:26 -- common/autotest_common.sh@852 -- # return 0 00:15:10.508 07:32:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:10.508 07:32:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:10.508 07:32:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.508 07:32:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.508 07:32:26 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.508 07:32:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.508 07:32:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.508 [2024-07-14 07:32:26.542272] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.508 07:32:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.508 07:32:26 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:10.508 07:32:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.508 07:32:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.508 Malloc0 00:15:10.508 07:32:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.508 07:32:26 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.508 07:32:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.508 07:32:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.508 07:32:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.508 07:32:26 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.508 07:32:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.508 07:32:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.508 07:32:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.508 07:32:26 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.508 07:32:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.508 07:32:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.508 [2024-07-14 07:32:26.607532] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.508 07:32:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.508 07:32:26 -- target/queue_depth.sh@30 -- # bdevperf_pid=4083268 00:15:10.508 07:32:26 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:10.508 07:32:26 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:10.508 07:32:26 -- target/queue_depth.sh@33 -- # waitforlisten 4083268 /var/tmp/bdevperf.sock 00:15:10.508 07:32:26 -- common/autotest_common.sh@819 -- # '[' -z 4083268 ']' 00:15:10.508 07:32:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.508 07:32:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.508 07:32:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:10.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:10.508 07:32:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.508 07:32:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.508 [2024-07-14 07:32:26.649657] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:10.508 [2024-07-14 07:32:26.649721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083268 ] 00:15:10.508 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.766 [2024-07-14 07:32:26.711507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.766 [2024-07-14 07:32:26.825581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.698 07:32:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.698 07:32:27 -- common/autotest_common.sh@852 -- # return 0 00:15:11.698 07:32:27 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:11.698 07:32:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.698 07:32:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.698 NVMe0n1 00:15:11.698 07:32:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.698 07:32:27 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:11.698 Running I/O for 10 seconds... 00:15:23.897 00:15:23.897 Latency(us) 00:15:23.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.897 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:23.897 Verification LBA range: start 0x0 length 0x4000 00:15:23.897 NVMe0n1 : 10.07 12358.75 48.28 0.00 0.00 82532.58 15049.01 62137.84 00:15:23.897 =================================================================================================================== 00:15:23.897 Total : 12358.75 48.28 0.00 0.00 82532.58 15049.01 62137.84 00:15:23.897 0 00:15:23.897 07:32:37 -- target/queue_depth.sh@39 -- # killprocess 4083268 00:15:23.897 07:32:37 -- common/autotest_common.sh@926 -- # '[' -z 4083268 ']' 00:15:23.897 07:32:37 -- common/autotest_common.sh@930 -- # kill -0 4083268 00:15:23.897 07:32:37 -- common/autotest_common.sh@931 -- # uname 00:15:23.897 07:32:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:23.897 07:32:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4083268 00:15:23.897 07:32:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:23.897 07:32:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:23.897 07:32:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4083268' 00:15:23.897 killing process with pid 4083268 00:15:23.897 07:32:37 -- common/autotest_common.sh@945 -- # kill 4083268 00:15:23.897 Received shutdown signal, test time was about 10.000000 seconds 00:15:23.897 00:15:23.897 Latency(us) 00:15:23.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.897 =================================================================================================================== 00:15:23.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.897 07:32:37 -- 
common/autotest_common.sh@950 -- # wait 4083268 00:15:23.897 07:32:38 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:23.897 07:32:38 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:23.897 07:32:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:23.897 07:32:38 -- nvmf/common.sh@116 -- # sync 00:15:23.897 07:32:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:23.897 07:32:38 -- nvmf/common.sh@119 -- # set +e 00:15:23.897 07:32:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:23.897 07:32:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:23.897 rmmod nvme_tcp 00:15:23.897 rmmod nvme_fabrics 00:15:23.897 rmmod nvme_keyring 00:15:23.897 07:32:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:23.897 07:32:38 -- nvmf/common.sh@123 -- # set -e 00:15:23.897 07:32:38 -- nvmf/common.sh@124 -- # return 0 00:15:23.897 07:32:38 -- nvmf/common.sh@477 -- # '[' -n 4083110 ']' 00:15:23.897 07:32:38 -- nvmf/common.sh@478 -- # killprocess 4083110 00:15:23.897 07:32:38 -- common/autotest_common.sh@926 -- # '[' -z 4083110 ']' 00:15:23.897 07:32:38 -- common/autotest_common.sh@930 -- # kill -0 4083110 00:15:23.897 07:32:38 -- common/autotest_common.sh@931 -- # uname 00:15:23.897 07:32:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:23.897 07:32:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4083110 00:15:23.897 07:32:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:23.897 07:32:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:23.897 07:32:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4083110' 00:15:23.897 killing process with pid 4083110 00:15:23.897 07:32:38 -- common/autotest_common.sh@945 -- # kill 4083110 00:15:23.897 07:32:38 -- common/autotest_common.sh@950 -- # wait 4083110 00:15:23.897 07:32:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:23.897 07:32:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:23.897 07:32:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:23.897 07:32:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.897 07:32:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:23.897 07:32:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.897 07:32:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.897 07:32:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.833 07:32:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:24.833 00:15:24.833 real 0m17.412s 00:15:24.833 user 0m24.930s 00:15:24.833 sys 0m3.154s 00:15:24.833 07:32:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.833 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:15:24.833 ************************************ 00:15:24.833 END TEST nvmf_queue_depth 00:15:24.833 ************************************ 00:15:24.833 07:32:40 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:24.833 07:32:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:24.833 07:32:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:24.833 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:15:24.833 ************************************ 00:15:24.833 START TEST nvmf_multipath 00:15:24.833 ************************************ 00:15:24.833 07:32:40 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:24.833 * Looking for test storage... 00:15:24.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.833 07:32:40 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.833 07:32:40 -- nvmf/common.sh@7 -- # uname -s 00:15:24.833 07:32:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.833 07:32:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.833 07:32:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.833 07:32:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.833 07:32:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.833 07:32:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.833 07:32:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.833 07:32:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.833 07:32:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.833 07:32:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.833 07:32:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.833 07:32:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.833 07:32:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.833 07:32:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.833 07:32:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.833 07:32:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.833 07:32:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.833 07:32:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.833 07:32:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.833 07:32:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.833 07:32:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.833 07:32:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.833 07:32:40 -- paths/export.sh@5 -- # export PATH 00:15:24.833 07:32:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.833 07:32:40 -- nvmf/common.sh@46 -- # : 0 00:15:24.833 07:32:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:24.833 07:32:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:24.833 07:32:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:24.833 07:32:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.833 07:32:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.833 07:32:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:24.833 07:32:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:24.833 07:32:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:24.833 07:32:40 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.833 07:32:40 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.833 07:32:40 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:24.833 07:32:40 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.833 07:32:40 -- target/multipath.sh@43 -- # nvmftestinit 00:15:24.833 07:32:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:24.833 07:32:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.833 07:32:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:24.833 07:32:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:24.833 07:32:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:24.833 07:32:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.833 07:32:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.833 07:32:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.833 07:32:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:24.833 07:32:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:24.833 07:32:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:24.833 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:15:26.732 07:32:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:26.732 07:32:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:26.732 07:32:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:26.732 07:32:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:26.732 07:32:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:26.732 07:32:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:26.732 07:32:42 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:15:26.732 07:32:42 -- nvmf/common.sh@294 -- # net_devs=() 00:15:26.732 07:32:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:26.732 07:32:42 -- nvmf/common.sh@295 -- # e810=() 00:15:26.732 07:32:42 -- nvmf/common.sh@295 -- # local -ga e810 00:15:26.732 07:32:42 -- nvmf/common.sh@296 -- # x722=() 00:15:26.732 07:32:42 -- nvmf/common.sh@296 -- # local -ga x722 00:15:26.732 07:32:42 -- nvmf/common.sh@297 -- # mlx=() 00:15:26.732 07:32:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:26.732 07:32:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.732 07:32:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.732 07:32:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.732 07:32:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.732 07:32:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.733 07:32:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.733 07:32:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.733 07:32:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.733 07:32:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.733 07:32:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.733 07:32:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.733 07:32:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:26.733 07:32:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:26.733 07:32:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:26.733 07:32:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:26.733 07:32:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:26.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:26.733 07:32:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:26.733 07:32:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:26.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:26.733 07:32:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:26.733 07:32:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:26.733 07:32:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.733 07:32:42 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:15:26.733 07:32:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.733 07:32:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:26.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:26.733 07:32:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.733 07:32:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:26.733 07:32:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.733 07:32:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:26.733 07:32:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.733 07:32:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:26.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:26.733 07:32:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.733 07:32:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:26.733 07:32:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:26.733 07:32:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:26.733 07:32:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.733 07:32:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.733 07:32:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:26.733 07:32:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:26.733 07:32:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:26.733 07:32:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:26.733 07:32:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:26.733 07:32:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:26.733 07:32:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.733 07:32:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:26.733 07:32:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:26.733 07:32:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:26.733 07:32:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:26.733 07:32:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:26.733 07:32:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:26.733 07:32:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:26.733 07:32:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:26.733 07:32:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:26.733 07:32:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:26.733 07:32:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:26.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:15:26.733 00:15:26.733 --- 10.0.0.2 ping statistics --- 00:15:26.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.733 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:26.733 07:32:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:26.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:26.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:15:26.733 00:15:26.733 --- 10.0.0.1 ping statistics --- 00:15:26.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.733 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:26.733 07:32:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.733 07:32:42 -- nvmf/common.sh@410 -- # return 0 00:15:26.733 07:32:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:26.733 07:32:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.733 07:32:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:26.733 07:32:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.733 07:32:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:26.733 07:32:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:26.733 07:32:42 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:26.733 07:32:42 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:26.733 only one NIC for nvmf test 00:15:26.733 07:32:42 -- target/multipath.sh@47 -- # nvmftestfini 00:15:26.733 07:32:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:26.733 07:32:42 -- nvmf/common.sh@116 -- # sync 00:15:26.733 07:32:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:26.733 07:32:42 -- nvmf/common.sh@119 -- # set +e 00:15:26.733 07:32:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:26.733 07:32:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:26.733 rmmod nvme_tcp 00:15:26.733 rmmod nvme_fabrics 00:15:26.733 rmmod nvme_keyring 00:15:26.992 07:32:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:26.992 07:32:42 -- nvmf/common.sh@123 -- # set -e 00:15:26.992 07:32:42 -- nvmf/common.sh@124 -- # return 0 00:15:26.992 07:32:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:26.992 07:32:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:26.992 07:32:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:26.992 07:32:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:26.992 07:32:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.992 07:32:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:26.992 07:32:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.992 07:32:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.992 07:32:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.899 07:32:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:28.899 07:32:44 -- target/multipath.sh@48 -- # exit 0 00:15:28.899 07:32:44 -- target/multipath.sh@1 -- # nvmftestfini 00:15:28.899 07:32:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:28.899 07:32:44 -- nvmf/common.sh@116 -- # sync 00:15:28.899 07:32:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:28.899 07:32:44 -- nvmf/common.sh@119 -- # set +e 00:15:28.899 07:32:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:28.899 07:32:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:28.899 07:32:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:28.899 07:32:44 -- nvmf/common.sh@123 -- # set -e 00:15:28.899 07:32:44 -- nvmf/common.sh@124 -- # return 0 00:15:28.899 07:32:44 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:28.900 07:32:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:28.900 07:32:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:28.900 07:32:44 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:15:28.900 07:32:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.900 07:32:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:28.900 07:32:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.900 07:32:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.900 07:32:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.900 07:32:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:28.900 00:15:28.900 real 0m4.285s 00:15:28.900 user 0m0.851s 00:15:28.900 sys 0m1.416s 00:15:28.900 07:32:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.900 07:32:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.900 ************************************ 00:15:28.900 END TEST nvmf_multipath 00:15:28.900 ************************************ 00:15:28.900 07:32:45 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:28.900 07:32:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:28.900 07:32:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:28.900 07:32:45 -- common/autotest_common.sh@10 -- # set +x 00:15:28.900 ************************************ 00:15:28.900 START TEST nvmf_zcopy 00:15:28.900 ************************************ 00:15:28.900 07:32:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:28.900 * Looking for test storage... 00:15:28.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.900 07:32:45 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.900 07:32:45 -- nvmf/common.sh@7 -- # uname -s 00:15:28.900 07:32:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.900 07:32:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.900 07:32:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.900 07:32:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.900 07:32:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.900 07:32:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.900 07:32:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.900 07:32:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.900 07:32:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.900 07:32:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.900 07:32:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.900 07:32:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.900 07:32:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.900 07:32:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.900 07:32:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.900 07:32:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.158 07:32:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.158 07:32:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.158 07:32:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.158 07:32:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.158 07:32:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.158 07:32:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.158 07:32:45 -- paths/export.sh@5 -- # export PATH 00:15:29.158 07:32:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.158 07:32:45 -- nvmf/common.sh@46 -- # : 0 00:15:29.158 07:32:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:29.158 07:32:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:29.158 07:32:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:29.158 07:32:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.158 07:32:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.158 07:32:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:29.158 07:32:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:29.158 07:32:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:29.158 07:32:45 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:29.158 07:32:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:29.158 07:32:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.158 07:32:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:29.158 07:32:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:29.158 07:32:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:29.158 07:32:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.158 07:32:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.158 07:32:45 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.158 07:32:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:29.158 07:32:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:29.158 07:32:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:29.158 07:32:45 -- common/autotest_common.sh@10 -- # set +x 00:15:31.057 07:32:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:31.057 07:32:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:31.057 07:32:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:31.058 07:32:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:31.058 07:32:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:31.058 07:32:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:31.058 07:32:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:31.058 07:32:46 -- nvmf/common.sh@294 -- # net_devs=() 00:15:31.058 07:32:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:31.058 07:32:46 -- nvmf/common.sh@295 -- # e810=() 00:15:31.058 07:32:46 -- nvmf/common.sh@295 -- # local -ga e810 00:15:31.058 07:32:46 -- nvmf/common.sh@296 -- # x722=() 00:15:31.058 07:32:46 -- nvmf/common.sh@296 -- # local -ga x722 00:15:31.058 07:32:46 -- nvmf/common.sh@297 -- # mlx=() 00:15:31.058 07:32:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:31.058 07:32:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.058 07:32:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:31.058 07:32:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:31.058 07:32:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:31.058 07:32:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:31.058 07:32:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:31.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:31.058 07:32:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:31.058 07:32:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:31.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:31.058 
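By this point nvmf/common.sh has bucketed the NICs on the bus by PCI vendor:device ID: 0x8086:0x159b is the Intel E810-family part driven by ice, and both 0000:0a:00.x functions land in the e810 bucket that becomes pci_devs. A minimal sketch of that bucketing, assuming a pre-populated pci_bus_cache map from "vendor:device" keys to PCI addresses as the script uses (the map contents below are this run's values; the echo format is simplified):

    # Hypothetical pre-filled cache: "vendor:device" -> PCI addresses (values from this run)
    declare -A pci_bus_cache=(["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1")
    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x159b"]})   # unquoted on purpose: splits into one element per address
    pci_devs=("${e810[@]}")                    # tcp tests accept any supported family, so e810 wins here
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci"                      # per-device driver checks (unknown/unbound) follow
    done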
07:32:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:31.058 07:32:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:31.058 07:32:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.058 07:32:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:31.058 07:32:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.058 07:32:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:31.058 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:31.058 07:32:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.058 07:32:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:31.058 07:32:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.058 07:32:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:31.058 07:32:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.058 07:32:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:31.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:31.058 07:32:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.058 07:32:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:31.058 07:32:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:31.058 07:32:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:31.058 07:32:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.058 07:32:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.058 07:32:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.058 07:32:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:31.058 07:32:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.058 07:32:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.058 07:32:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:31.058 07:32:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:31.058 07:32:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.058 07:32:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:31.058 07:32:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:31.058 07:32:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.058 07:32:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.058 07:32:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.058 07:32:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.058 07:32:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:31.058 07:32:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.058 07:32:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.058 07:32:47 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.058 07:32:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:31.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:15:31.058 00:15:31.058 --- 10.0.0.2 ping statistics --- 00:15:31.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.058 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:15:31.058 07:32:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:15:31.058 00:15:31.058 --- 10.0.0.1 ping statistics --- 00:15:31.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.058 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:31.058 07:32:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.058 07:32:47 -- nvmf/common.sh@410 -- # return 0 00:15:31.058 07:32:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.058 07:32:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.058 07:32:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:31.058 07:32:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.058 07:32:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:31.058 07:32:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:31.058 07:32:47 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:31.058 07:32:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.058 07:32:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:31.058 07:32:47 -- common/autotest_common.sh@10 -- # set +x 00:15:31.058 07:32:47 -- nvmf/common.sh@469 -- # nvmfpid=4088508 00:15:31.058 07:32:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:31.058 07:32:47 -- nvmf/common.sh@470 -- # waitforlisten 4088508 00:15:31.058 07:32:47 -- common/autotest_common.sh@819 -- # '[' -z 4088508 ']' 00:15:31.058 07:32:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.058 07:32:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:31.058 07:32:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.058 07:32:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:31.058 07:32:47 -- common/autotest_common.sh@10 -- # set +x 00:15:31.058 [2024-07-14 07:32:47.206809] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
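nvmf_tcp_init, traced above (and identically in the multipath test earlier), always builds the same two-endpoint topology: the first E810 port moves into a private network namespace and becomes the target at 10.0.0.2, the second port stays in the root namespace as the initiator at 10.0.0.1, and the two pings confirm that traffic crosses the physical link between the ports rather than loopback. Condensed from this run's trace (interface and namespace names are this run's; all commands need root):

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator side
    ping -c 1 10.0.0.2                                             # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator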
00:15:31.058 [2024-07-14 07:32:47.206909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.317 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.317 [2024-07-14 07:32:47.276459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.317 [2024-07-14 07:32:47.393240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:31.317 [2024-07-14 07:32:47.393407] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.317 [2024-07-14 07:32:47.393427] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.317 [2024-07-14 07:32:47.393442] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.317 [2024-07-14 07:32:47.393474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.255 07:32:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:32.255 07:32:48 -- common/autotest_common.sh@852 -- # return 0 00:15:32.255 07:32:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:32.255 07:32:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:32.255 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.255 07:32:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.255 07:32:48 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:32.255 07:32:48 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:32.255 07:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.255 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.255 [2024-07-14 07:32:48.155326] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.255 07:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.255 07:32:48 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:32.255 07:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.255 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.255 07:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.256 07:32:48 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.256 07:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.256 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.256 [2024-07-14 07:32:48.171480] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.256 07:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.256 07:32:48 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.256 07:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.256 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.256 07:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.256 07:32:48 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:32.256 07:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.256 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.256 malloc0 00:15:32.256 07:32:48 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:15:32.256 07:32:48 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:32.256 07:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.256 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.256 07:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.256 07:32:48 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:32.256 07:32:48 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:32.256 07:32:48 -- nvmf/common.sh@520 -- # config=() 00:15:32.256 07:32:48 -- nvmf/common.sh@520 -- # local subsystem config 00:15:32.256 07:32:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:32.256 07:32:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:32.256 { 00:15:32.256 "params": { 00:15:32.256 "name": "Nvme$subsystem", 00:15:32.256 "trtype": "$TEST_TRANSPORT", 00:15:32.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:32.256 "adrfam": "ipv4", 00:15:32.256 "trsvcid": "$NVMF_PORT", 00:15:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:32.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:32.256 "hdgst": ${hdgst:-false}, 00:15:32.256 "ddgst": ${ddgst:-false} 00:15:32.256 }, 00:15:32.256 "method": "bdev_nvme_attach_controller" 00:15:32.256 } 00:15:32.256 EOF 00:15:32.256 )") 00:15:32.256 07:32:48 -- nvmf/common.sh@542 -- # cat 00:15:32.256 07:32:48 -- nvmf/common.sh@544 -- # jq . 00:15:32.256 07:32:48 -- nvmf/common.sh@545 -- # IFS=, 00:15:32.256 07:32:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:32.256 "params": { 00:15:32.256 "name": "Nvme1", 00:15:32.256 "trtype": "tcp", 00:15:32.256 "traddr": "10.0.0.2", 00:15:32.256 "adrfam": "ipv4", 00:15:32.256 "trsvcid": "4420", 00:15:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.256 "hdgst": false, 00:15:32.256 "ddgst": false 00:15:32.256 }, 00:15:32.256 "method": "bdev_nvme_attach_controller" 00:15:32.256 }' 00:15:32.256 [2024-07-14 07:32:48.248015] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:32.256 [2024-07-14 07:32:48.248086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4088666 ] 00:15:32.256 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.256 [2024-07-14 07:32:48.311595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.561 [2024-07-14 07:32:48.437777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.819 Running I/O for 10 seconds... 
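The rpc_cmd traces above are the whole target-side setup for the zcopy test. Rendered as the equivalent scripts/rpc.py calls (rpc_cmd is the suite's wrapper; assuming the default RPC socket of the nvmf_tgt started in the namespace, flags exactly as traced):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                   # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB RAM disk, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # malloc0 becomes NSID 1

bdevperf then attaches from the root namespace with the JSON generated above (--json /dev/fd/62) and runs the 10-second verify workload at queue depth 128 with 8 KiB I/Os, whose results follow.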
00:15:42.787 
00:15:42.787                                                       Latency(us)
00:15:42.787 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:42.787 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:42.787 Verification LBA range: start 0x0 length 0x1000
00:15:42.787 Nvme1n1            :      10.01    9007.11      70.37       0.00       0.00   14177.70    1686.95   23495.87
00:15:42.787 ===================================================================================================================
00:15:42.787 Total              :               9007.11      70.37       0.00       0.00   14177.70    1686.95   23495.87
00:15:43.046 07:32:59 -- target/zcopy.sh@39 -- # perfpid=4089987
00:15:43.046 07:32:59 -- target/zcopy.sh@41 -- # xtrace_disable
00:15:43.046 07:32:59 -- common/autotest_common.sh@10 -- # set +x
00:15:43.046 07:32:59 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:43.046 07:32:59 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:43.046 07:32:59 -- nvmf/common.sh@520 -- # config=()
00:15:43.046 07:32:59 -- nvmf/common.sh@520 -- # local subsystem config
00:15:43.046 07:32:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:43.046 07:32:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:43.046 {
00:15:43.046   "params": {
00:15:43.046     "name": "Nvme$subsystem",
00:15:43.046     "trtype": "$TEST_TRANSPORT",
00:15:43.046     "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:43.046     "adrfam": "ipv4",
00:15:43.046     "trsvcid": "$NVMF_PORT",
00:15:43.046     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:43.046     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:43.046     "hdgst": ${hdgst:-false},
00:15:43.046     "ddgst": ${ddgst:-false}
00:15:43.046   },
00:15:43.046   "method": "bdev_nvme_attach_controller"
00:15:43.046 }
00:15:43.046 EOF
00:15:43.046 )")
00:15:43.046 07:32:59 -- nvmf/common.sh@542 -- # cat
00:15:43.046 [2024-07-14 07:32:59.044136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:43.046 [2024-07-14 07:32:59.044197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:43.046 07:32:59 -- nvmf/common.sh@544 -- # jq . 
00:15:43.046 07:32:59 -- nvmf/common.sh@545 -- # IFS=, 00:15:43.046 07:32:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:43.046 "params": { 00:15:43.046 "name": "Nvme1", 00:15:43.046 "trtype": "tcp", 00:15:43.046 "traddr": "10.0.0.2", 00:15:43.046 "adrfam": "ipv4", 00:15:43.046 "trsvcid": "4420", 00:15:43.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:43.046 "hdgst": false, 00:15:43.046 "ddgst": false 00:15:43.046 }, 00:15:43.046 "method": "bdev_nvme_attach_controller" 00:15:43.046 }' 00:15:43.046 [2024-07-14 07:32:59.052089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.052113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.060110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.060133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.068132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.068155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.076153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.076194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.077722] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:43.046 [2024-07-14 07:32:59.077787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089987 ] 00:15:43.046 [2024-07-14 07:32:59.084176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.084199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.092210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.092246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.100253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.100273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.046 [2024-07-14 07:32:59.108255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.108275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.116293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.116319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.124318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.124343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.132341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.132366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.140364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.140389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.141233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.046 [2024-07-14 07:32:59.148413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.148448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.156442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.156483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.164430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.164454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.172451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.172476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.180474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.180498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.188520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.188546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.196516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.196542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.204545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.204571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.046 [2024-07-14 07:32:59.212604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.046 [2024-07-14 07:32:59.212648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.220588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.220615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.228607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.228633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.236629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.236654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.244652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 
07:32:59.244677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.252676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.252702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.260106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.305 [2024-07-14 07:32:59.260697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.260721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.268726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.268753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.276767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.276802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.284797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.284836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.292822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.292873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.300838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.300885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.308876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.308941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.316913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.316950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.324930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.324967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.332908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.332946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.340972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.341008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.348988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.349025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.357001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.357028] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.364994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.365015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.373018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.373040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.381096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.381130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.389114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.389152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.397137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.397179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.405178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.405207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.413202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.413230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.421225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.421252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.429248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.429273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.437274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.437299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.445296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.445321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.453319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.453344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.461347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.461374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.305 [2024-07-14 07:32:59.469364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.305 [2024-07-14 07:32:59.469389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.477402] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.477428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.485413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.485437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.493436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.493461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.501459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.501484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.509477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.509504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.517498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.517524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.525522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.525547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.533544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.533575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.541569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.541594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.549592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.549618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.557657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.557686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.565643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.565673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 Running I/O for 5 seconds... 
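Each error pair flooding this stretch of the log (subsystem.c "Requested NSID 1 already in use" followed by nvmf_rpc.c "Unable to add namespace") is one rejected RPC: while the 5-second randrw bdevperf job runs, the test keeps re-issuing nvmf_subsystem_add_ns for the NSID that malloc0 already holds, exercising the subsystem pause/add-namespace path under live zero-copy I/O; every attempt is expected to fail cleanly. A minimal sketch of the operation behind each pair (the loop condition is our illustration, not the script's exact control flow):

    # Hammer the target with duplicate-NSID adds while bdevperf ($perfpid) is alive;
    # the target must reject each one without disturbing in-flight I/O.
    while kill -0 "$perfpid" 2>/dev/null; do
        if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
            echo "duplicate NSID unexpectedly accepted" >&2
            exit 1
        fi
    done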
00:15:43.564 [2024-07-14 07:32:59.573668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.573695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.589071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.589101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.601694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.601735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.616329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.616362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.629355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.629395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.643604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.643644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.657387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.657427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.671494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.564 [2024-07-14 07:32:59.671532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.564 [2024-07-14 07:32:59.683465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.565 [2024-07-14 07:32:59.683493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.565 [2024-07-14 07:32:59.696321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.565 [2024-07-14 07:32:59.696357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.565 [2024-07-14 07:32:59.708297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.565 [2024-07-14 07:32:59.708331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.565 [2024-07-14 07:32:59.721436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.565 [2024-07-14 07:32:59.721469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.565 [2024-07-14 07:32:59.734508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.565 [2024-07-14 07:32:59.734557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.746939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.746973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.760762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 
[2024-07-14 07:32:59.760802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.772257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.772290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.785148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.785180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.796344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.796372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.808592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.808624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.820921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.820956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.833648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.833681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.845991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.846024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.858223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.858256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.871139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.871171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.883266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.883299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.895902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.895936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.909093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.909128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.920499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.920533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.933490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.933524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.945003] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.945037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.958126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.958160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.969950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.969984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.822 [2024-07-14 07:32:59.983301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.822 [2024-07-14 07:32:59.983335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:32:59.995405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:32:59.995437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.008621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.008664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.020593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.020639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.034242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.034276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.046498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.046531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.059457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.059489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.071019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.071056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.084262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.084295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.096445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.096478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.110087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.110131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.080 [2024-07-14 07:33:00.123016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.080 [2024-07-14 07:33:00.123050] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:44.080 [2024-07-14 07:33:00.136269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:44.080 [2024-07-14 07:33:00.136303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair (subsystem.c:1793 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1513 "Unable to add namespace") repeats roughly 300 more times at 12-13 ms intervals, target timestamps 07:33:00.149 through 07:33:04.004, build clock 00:15:44.080 through 00:15:47.953 ...]
00:15:47.953 [2024-07-14 07:33:04.004898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:47.953 [2024-07-14 07:33:04.004931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:47.953 [2024-07-14 07:33:04.017790]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.017816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.953 [2024-07-14 07:33:04.029772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.029805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.953 [2024-07-14 07:33:04.042792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.042824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.953 [2024-07-14 07:33:04.054972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.055004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.953 [2024-07-14 07:33:04.067722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.067753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.953 [2024-07-14 07:33:04.079456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.079490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.953 [2024-07-14 07:33:04.091993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.092027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.953 [2024-07-14 07:33:04.103927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.103961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.953 [2024-07-14 07:33:04.117406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.953 [2024-07-14 07:33:04.117439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.129356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.129388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.142350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.142383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.154545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.154578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.167776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.167809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.179798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.179832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.192795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.192842] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.204618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.204651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.217654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.217685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.230647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.230680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.242916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.242952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.256792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.256824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.268934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.268968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.282203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.282236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.295220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.295253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.308132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.308178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.320965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.320997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.333404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.333436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.347015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.347049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.359137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.359171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.211 [2024-07-14 07:33:04.372375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.211 [2024-07-14 07:33:04.372407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.384707] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.384739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.397424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.397456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.409697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.409730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.420691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.420723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.433174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.433222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.446478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.446511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.459387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.459419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.471226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.471260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.484438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.484470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.496548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.496580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.509682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.509714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.522204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.522251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.535360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.535392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.550579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.550613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.469 [2024-07-14 07:33:04.561919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.469 [2024-07-14 07:33:04.561966] 
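The storm of paired errors above is the expected outcome of this phase of the test, not a failure: the target already exposes NSID 1 on nqn.2016-06.io.spdk:cnode1, so every further add for that NSID is rejected by spdk_nvmf_subsystem_add_ns_ext, after which the RPC layer logs the failed request. A minimal sketch of a loop that produces this pattern while I/O is in flight ($io_pid is a hypothetical variable; the real zcopy.sh plumbing differs):

    # Hammer the target with duplicate namespace adds while an I/O job ($io_pid) runs;
    # every call is expected to fail with "Requested NSID 1 already in use".
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$io_pid" 2>/dev/null; do
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done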
00:15:48.469
00:15:48.469 Latency(us)
00:15:48.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:48.469 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:48.469 Nvme1n1 : 5.01 9956.54 77.79 0.00 0.00 12837.24 4320.52 23204.60
00:15:48.469 ===================================================================================================================
00:15:48.469 Total : 9956.54 77.79 0.00 0.00 12837.24 4320.52 23204.60
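The summary row is internally consistent, which is a quick way to sanity-check a run like this: the MiB/s column is just IOPS times the 8192-byte I/O size, and with queue depth 128 the average latency implies almost exactly the reported IOPS. A back-of-the-envelope check (not part of the test output):

    # 9956.54 IOPS * 8192 B per I/O = 77.79 MiB/s;
    # 128 outstanding I/Os / 12.837 ms average latency ~ 9971 IOPS, close to 9956.54
    awk 'BEGIN { printf "%.2f MiB/s\n", 9956.54 * 8192 / 1048576;
                 printf "%.0f IOPS\n",  128 / 0.01283724 }'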
[2024-07-14 07:33:04.601302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-14 07:33:04.601327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair keeps recurring at roughly 8 ms intervals through 07:33:04.841989 while the controller winds down ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4089987) - No such process
00:15:48.727 07:33:04 -- target/zcopy.sh@49 -- # wait 4089987
00:15:48.727 07:33:04 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:48.727 07:33:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:48.727 07:33:04 -- common/autotest_common.sh@10 -- # set +x
00:15:48.727 07:33:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:48.727 07:33:04 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:48.727 07:33:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:48.727 07:33:04 -- common/autotest_common.sh@10 -- # set +x
00:15:48.727 delay0
00:15:48.727 07:33:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:48.727 07:33:04 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:48.727 07:33:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:48.727 07:33:04 -- common/autotest_common.sh@10 -- # set +x
00:15:48.727 07:33:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:48.727 07:33:04 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:15:48.984 EAL: No free 2048 kB hugepages reported on node 1
00:15:48.984 [2024-07-14 07:33:04.953407] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:55.536 Initializing NVMe Controllers
00:15:55.536 Attached to NVMe over Fabrics
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:55.536 Initialization complete. Launching workers. 00:15:55.536 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 52 00:15:55.536 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 339, failed to submit 33 00:15:55.536 success 127, unsuccess 212, failed 0 00:15:55.536 07:33:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:55.536 07:33:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:55.536 07:33:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:55.536 07:33:11 -- nvmf/common.sh@116 -- # sync 00:15:55.536 07:33:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:55.536 07:33:11 -- nvmf/common.sh@119 -- # set +e 00:15:55.536 07:33:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:55.536 07:33:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:55.536 rmmod nvme_tcp 00:15:55.536 rmmod nvme_fabrics 00:15:55.536 rmmod nvme_keyring 00:15:55.536 07:33:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:55.536 07:33:11 -- nvmf/common.sh@123 -- # set -e 00:15:55.536 07:33:11 -- nvmf/common.sh@124 -- # return 0 00:15:55.536 07:33:11 -- nvmf/common.sh@477 -- # '[' -n 4088508 ']' 00:15:55.536 07:33:11 -- nvmf/common.sh@478 -- # killprocess 4088508 00:15:55.536 07:33:11 -- common/autotest_common.sh@926 -- # '[' -z 4088508 ']' 00:15:55.536 07:33:11 -- common/autotest_common.sh@930 -- # kill -0 4088508 00:15:55.536 07:33:11 -- common/autotest_common.sh@931 -- # uname 00:15:55.536 07:33:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.536 07:33:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4088508 00:15:55.536 07:33:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:55.536 07:33:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:55.536 07:33:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4088508' 00:15:55.536 killing process with pid 4088508 00:15:55.536 07:33:11 -- common/autotest_common.sh@945 -- # kill 4088508 00:15:55.536 07:33:11 -- common/autotest_common.sh@950 -- # wait 4088508 00:15:55.536 07:33:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:55.536 07:33:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:55.536 07:33:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:55.536 07:33:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.536 07:33:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:55.536 07:33:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.536 07:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.536 07:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.459 07:33:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:57.459 00:15:57.459 real 0m28.605s 00:15:57.459 user 0m42.163s 00:15:57.459 sys 0m8.258s 00:15:57.459 07:33:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.459 07:33:13 -- common/autotest_common.sh@10 -- # set +x 00:15:57.459 ************************************ 00:15:57.459 END TEST nvmf_zcopy 00:15:57.459 ************************************ 00:15:57.717 07:33:13 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:57.718 07:33:13 -- common/autotest_common.sh@1077 
-- '[' 3 -le 1 ']'
00:15:57.718 07:33:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:15:57.718 07:33:13 -- common/autotest_common.sh@10 -- # set +x
00:15:57.718 ************************************
00:15:57.718 START TEST nvmf_nmic
00:15:57.718 ************************************
00:15:57.718 07:33:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:15:57.718 * Looking for test storage...
00:15:57.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:57.718 07:33:13 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:57.718 07:33:13 -- nvmf/common.sh@7 -- # uname -s
00:15:57.718 07:33:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:57.718 07:33:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:57.718 07:33:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:57.718 07:33:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:57.718 07:33:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:57.718 07:33:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:57.718 07:33:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:57.718 07:33:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:57.718 07:33:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:57.718 07:33:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:57.718 07:33:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:57.718 07:33:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:15:57.718 07:33:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:57.718 07:33:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:57.718 07:33:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:57.718 07:33:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:57.718 07:33:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:57.718 07:33:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:57.718 07:33:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:57.718 07:33:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated by each nested re-export ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:57.718 07:33:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... likewise ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:57.718 07:33:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... likewise ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:57.718 07:33:13 -- paths/export.sh@5 -- # export PATH
00:15:57.718 07:33:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same PATH value as above ...]:/var/lib/snapd/snap/bin
00:15:57.718 07:33:13 -- nvmf/common.sh@46 -- # : 0
00:15:57.718 07:33:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:15:57.718 07:33:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:15:57.718 07:33:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:15:57.718 07:33:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:57.718 07:33:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:57.718 07:33:13 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:15:57.718 07:33:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:15:57.718 07:33:13 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:15:57.718 07:33:13 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:57.718 07:33:13 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:57.718 07:33:13 -- target/nmic.sh@14 -- # nvmftestinit
00:15:57.718 07:33:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:15:57.718 07:33:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:57.718 07:33:13 -- nvmf/common.sh@436 -- # prepare_net_devs
00:15:57.718 07:33:13 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:15:57.718 07:33:13 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:15:57.718 07:33:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:57.718 07:33:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:57.718 07:33:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:57.718 07:33:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:15:57.718 07:33:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:15:57.718 07:33:13 -- nvmf/common.sh@284 -- # xtrace_disable
00:15:57.718 07:33:13 -- common/autotest_common.sh@10 -- # set +x
00:15:59.620 07:33:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:15:59.620 07:33:15 -- nvmf/common.sh@290 -- # pci_devs=()
00:15:59.620 07:33:15 -- nvmf/common.sh@290 -- # local -a pci_devs
00:15:59.620 07:33:15 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:15:59.620 07:33:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:15:59.620 07:33:15 -- nvmf/common.sh@292 -- # pci_drivers=()
00:15:59.620 07:33:15 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:15:59.620 07:33:15 -- nvmf/common.sh@294 -- # net_devs=()
00:15:59.620 07:33:15 -- nvmf/common.sh@294 -- # local -ga net_devs
00:15:59.620 07:33:15 -- nvmf/common.sh@295 -- #
e810=() 00:15:59.620 07:33:15 -- nvmf/common.sh@295 -- # local -ga e810 00:15:59.620 07:33:15 -- nvmf/common.sh@296 -- # x722=() 00:15:59.620 07:33:15 -- nvmf/common.sh@296 -- # local -ga x722 00:15:59.620 07:33:15 -- nvmf/common.sh@297 -- # mlx=() 00:15:59.620 07:33:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:59.620 07:33:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.620 07:33:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.620 07:33:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.620 07:33:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.620 07:33:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.621 07:33:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.621 07:33:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.621 07:33:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.621 07:33:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.621 07:33:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.621 07:33:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.621 07:33:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:59.621 07:33:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:59.621 07:33:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:59.621 07:33:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:59.621 07:33:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:59.621 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:59.621 07:33:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:59.621 07:33:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:59.621 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:59.621 07:33:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:59.621 07:33:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:59.621 07:33:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.621 07:33:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:59.621 07:33:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.621 07:33:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:59.621 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:15:59.621 07:33:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.621 07:33:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:59.621 07:33:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.621 07:33:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:59.621 07:33:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.621 07:33:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:59.621 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:59.621 07:33:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.621 07:33:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:59.621 07:33:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:59.621 07:33:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:59.621 07:33:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:59.621 07:33:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.621 07:33:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.621 07:33:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.621 07:33:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:59.621 07:33:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.621 07:33:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.621 07:33:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:59.621 07:33:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.621 07:33:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.621 07:33:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:59.621 07:33:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:59.621 07:33:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.621 07:33:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.880 07:33:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.880 07:33:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.880 07:33:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:59.880 07:33:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.880 07:33:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.880 07:33:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.880 07:33:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:59.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:15:59.880 00:15:59.880 --- 10.0.0.2 ping statistics --- 00:15:59.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.880 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:15:59.880 07:33:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:15:59.880 00:15:59.880 --- 10.0.0.1 ping statistics --- 00:15:59.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.880 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:15:59.880 07:33:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.880 07:33:15 -- nvmf/common.sh@410 -- # return 0 00:15:59.880 07:33:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:59.880 07:33:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.880 07:33:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:59.880 07:33:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:59.880 07:33:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.880 07:33:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:59.880 07:33:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:59.880 07:33:15 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:59.880 07:33:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:59.880 07:33:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:59.880 07:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:59.880 07:33:15 -- nvmf/common.sh@469 -- # nvmfpid=4093974 00:15:59.880 07:33:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:59.880 07:33:15 -- nvmf/common.sh@470 -- # waitforlisten 4093974 00:15:59.880 07:33:15 -- common/autotest_common.sh@819 -- # '[' -z 4093974 ']' 00:15:59.880 07:33:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.880 07:33:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:59.880 07:33:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.880 07:33:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:59.880 07:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:59.880 [2024-07-14 07:33:15.946759] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:59.880 [2024-07-14 07:33:15.946833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.880 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.880 [2024-07-14 07:33:16.017009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.139 [2024-07-14 07:33:16.139881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.139 [2024-07-14 07:33:16.140053] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.139 [2024-07-14 07:33:16.140075] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.139 [2024-07-14 07:33:16.140089] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
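The trace above (nvmf/common.sh@468-470) starts nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then waits for it with waitforlisten, which polls the RPC socket until the app answers. A standalone sketch of the same sequence, with paths and the namespace name taken from this log and the framework_wait_init RPC assumed available in this SPDK build:

    # Start the target in the test netns and block until its RPC socket is serving.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!
    # framework_wait_init returns once SPDK subsystem initialization has finished.
    until "$SPDK/scripts/rpc.py" -t 1 framework_wait_init >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || exit 1   # bail out if the target died during startup
        sleep 0.5
    done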
00:16:00.139 [2024-07-14 07:33:16.140186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.139 [2024-07-14 07:33:16.140251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.139 [2024-07-14 07:33:16.140439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.139 [2024-07-14 07:33:16.140443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.073 07:33:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:01.073 07:33:16 -- common/autotest_common.sh@852 -- # return 0 00:16:01.073 07:33:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:01.073 07:33:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 07:33:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.073 07:33:16 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 [2024-07-14 07:33:16.905325] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 Malloc0 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 [2024-07-14 07:33:16.956745] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:01.073 test case1: single bdev can't be used in multiple subsystems 00:16:01.073 07:33:16 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 
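Test case1 above can also be driven with plain rpc.py calls against the default /var/tmp/spdk.sock; this is a sketch of the sequence the trace performs, in which only the second nvmf_subsystem_add_ns is expected to fail, because the first one takes an exclusive write claim on Malloc0:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # claims the bdev
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo 'Adding namespace failed - expected result.'            # already claimed by cnode1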
00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@28 -- # nmic_status=0 00:16:01.073 07:33:16 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 [2024-07-14 07:33:16.980640] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:01.073 [2024-07-14 07:33:16.980669] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:01.073 [2024-07-14 07:33:16.980698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.073 request: 00:16:01.073 { 00:16:01.073 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:01.073 "namespace": { 00:16:01.073 "bdev_name": "Malloc0" 00:16:01.073 }, 00:16:01.073 "method": "nvmf_subsystem_add_ns", 00:16:01.073 "req_id": 1 00:16:01.073 } 00:16:01.073 Got JSON-RPC error response 00:16:01.073 response: 00:16:01.073 { 00:16:01.073 "code": -32602, 00:16:01.073 "message": "Invalid parameters" 00:16:01.073 } 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@29 -- # nmic_status=1 00:16:01.073 07:33:16 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:01.073 07:33:16 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:01.073 Adding namespace failed - expected result. 00:16:01.073 07:33:16 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:01.073 test case2: host connect to nvmf target in multiple paths 00:16:01.073 07:33:16 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:01.073 07:33:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.073 07:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:01.073 [2024-07-14 07:33:16.988750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:01.073 07:33:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.073 07:33:16 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:01.638 07:33:17 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:02.204 07:33:18 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:02.204 07:33:18 -- common/autotest_common.sh@1177 -- # local i=0 00:16:02.204 07:33:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.204 07:33:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:02.204 07:33:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:04.730 07:33:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:04.730 07:33:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:04.730 07:33:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.730 07:33:20 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:16:04.730 07:33:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.730 07:33:20 -- common/autotest_common.sh@1187 -- # return 0 00:16:04.730 07:33:20 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:04.730 [global] 00:16:04.730 thread=1 00:16:04.730 invalidate=1 00:16:04.730 rw=write 00:16:04.730 time_based=1 00:16:04.730 runtime=1 00:16:04.730 ioengine=libaio 00:16:04.730 direct=1 00:16:04.730 bs=4096 00:16:04.730 iodepth=1 00:16:04.730 norandommap=0 00:16:04.730 numjobs=1 00:16:04.730 00:16:04.730 verify_dump=1 00:16:04.730 verify_backlog=512 00:16:04.730 verify_state_save=0 00:16:04.730 do_verify=1 00:16:04.730 verify=crc32c-intel 00:16:04.730 [job0] 00:16:04.730 filename=/dev/nvme0n1 00:16:04.730 Could not set queue depth (nvme0n1) 00:16:04.730 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.730 fio-3.35 00:16:04.730 Starting 1 thread 00:16:05.663 00:16:05.663 job0: (groupid=0, jobs=1): err= 0: pid=4094701: Sun Jul 14 07:33:21 2024 00:16:05.663 read: IOPS=516, BW=2066KiB/s (2115kB/s)(2080KiB/1007msec) 00:16:05.663 slat (nsec): min=7450, max=57032, avg=16552.16, stdev=4429.64 00:16:05.663 clat (usec): min=332, max=41286, avg=1383.72, stdev=6345.46 00:16:05.663 lat (usec): min=344, max=41297, avg=1400.27, stdev=6346.11 00:16:05.663 clat percentiles (usec): 00:16:05.663 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:16:05.663 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 367], 60.00th=[ 371], 00:16:05.663 | 70.00th=[ 375], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 400], 00:16:05.663 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:05.663 | 99.99th=[41157] 00:16:05.663 write: IOPS=1016, BW=4068KiB/s (4165kB/s)(4096KiB/1007msec); 0 zone resets 00:16:05.663 slat (usec): min=7, max=2452, avg=17.65, stdev=76.63 00:16:05.663 clat (usec): min=201, max=699, avg=247.89, stdev=42.69 00:16:05.663 lat (usec): min=209, max=2737, avg=265.54, stdev=91.06 00:16:05.663 clat percentiles (usec): 00:16:05.663 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:16:05.663 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 243], 00:16:05.663 | 70.00th=[ 253], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 330], 00:16:05.663 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 441], 99.95th=[ 701], 00:16:05.663 | 99.99th=[ 701] 00:16:05.663 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:16:05.663 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:16:05.663 lat (usec) : 250=44.37%, 500=54.60%, 750=0.19% 00:16:05.663 lat (msec) : 50=0.84% 00:16:05.663 cpu : usr=2.39%, sys=2.58%, ctx=1548, majf=0, minf=2 00:16:05.663 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.663 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.663 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.663 00:16:05.663 Run status group 0 (all jobs): 00:16:05.663 READ: bw=2066KiB/s (2115kB/s), 2066KiB/s-2066KiB/s (2115kB/s-2115kB/s), io=2080KiB (2130kB), run=1007-1007msec 00:16:05.663 WRITE: bw=4068KiB/s (4165kB/s), 4068KiB/s-4068KiB/s (4165kB/s-4165kB/s), io=4096KiB (4194kB), 
run=1007-1007msec 00:16:05.663 00:16:05.663 Disk stats (read/write): 00:16:05.663 nvme0n1: ios=574/1024, merge=0/0, ticks=782/242, in_queue=1024, util=98.80% 00:16:05.663 07:33:21 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:05.663 07:33:21 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.663 07:33:21 -- common/autotest_common.sh@1198 -- # local i=0 00:16:05.663 07:33:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:05.663 07:33:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.663 07:33:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:05.663 07:33:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.663 07:33:21 -- common/autotest_common.sh@1210 -- # return 0 00:16:05.663 07:33:21 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:05.663 07:33:21 -- target/nmic.sh@53 -- # nvmftestfini 00:16:05.663 07:33:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:05.663 07:33:21 -- nvmf/common.sh@116 -- # sync 00:16:05.664 07:33:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:05.664 07:33:21 -- nvmf/common.sh@119 -- # set +e 00:16:05.664 07:33:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:05.664 07:33:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:05.664 rmmod nvme_tcp 00:16:05.664 rmmod nvme_fabrics 00:16:05.664 rmmod nvme_keyring 00:16:05.664 07:33:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:05.664 07:33:21 -- nvmf/common.sh@123 -- # set -e 00:16:05.664 07:33:21 -- nvmf/common.sh@124 -- # return 0 00:16:05.664 07:33:21 -- nvmf/common.sh@477 -- # '[' -n 4093974 ']' 00:16:05.664 07:33:21 -- nvmf/common.sh@478 -- # killprocess 4093974 00:16:05.664 07:33:21 -- common/autotest_common.sh@926 -- # '[' -z 4093974 ']' 00:16:05.664 07:33:21 -- common/autotest_common.sh@930 -- # kill -0 4093974 00:16:05.664 07:33:21 -- common/autotest_common.sh@931 -- # uname 00:16:05.664 07:33:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.664 07:33:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4093974 00:16:05.922 07:33:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.922 07:33:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.922 07:33:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4093974' 00:16:05.922 killing process with pid 4093974 00:16:05.922 07:33:21 -- common/autotest_common.sh@945 -- # kill 4093974 00:16:05.922 07:33:21 -- common/autotest_common.sh@950 -- # wait 4093974 00:16:06.182 07:33:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:06.182 07:33:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:06.182 07:33:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:06.182 07:33:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.182 07:33:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:06.182 07:33:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.182 07:33:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.182 07:33:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.087 07:33:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:08.087 00:16:08.087 real 0m10.578s 00:16:08.087 user 0m24.951s 00:16:08.087 sys 0m2.390s 00:16:08.087 07:33:24 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:16:08.087 07:33:24 -- common/autotest_common.sh@10 -- # set +x 00:16:08.087 ************************************ 00:16:08.087 END TEST nvmf_nmic 00:16:08.087 ************************************ 00:16:08.087 07:33:24 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:08.087 07:33:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:08.087 07:33:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:08.087 07:33:24 -- common/autotest_common.sh@10 -- # set +x 00:16:08.087 ************************************ 00:16:08.087 START TEST nvmf_fio_target 00:16:08.087 ************************************ 00:16:08.087 07:33:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:08.346 * Looking for test storage... 00:16:08.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.346 07:33:24 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.346 07:33:24 -- nvmf/common.sh@7 -- # uname -s 00:16:08.346 07:33:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.346 07:33:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.346 07:33:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.346 07:33:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.346 07:33:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.346 07:33:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.346 07:33:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.346 07:33:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.346 07:33:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.346 07:33:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.346 07:33:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.346 07:33:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.346 07:33:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.346 07:33:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.346 07:33:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.346 07:33:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.346 07:33:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.346 07:33:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.346 07:33:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.346 07:33:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.346 07:33:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.346 07:33:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.346 07:33:24 -- paths/export.sh@5 -- # export PATH 00:16:08.346 07:33:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.346 07:33:24 -- nvmf/common.sh@46 -- # : 0 00:16:08.346 07:33:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:08.347 07:33:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:08.347 07:33:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:08.347 07:33:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.347 07:33:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.347 07:33:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:08.347 07:33:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:08.347 07:33:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:08.347 07:33:24 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.347 07:33:24 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.347 07:33:24 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.347 07:33:24 -- target/fio.sh@16 -- # nvmftestinit 00:16:08.347 07:33:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:08.347 07:33:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.347 07:33:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:08.347 07:33:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:08.347 07:33:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:08.347 07:33:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.347 07:33:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.347 07:33:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.347 07:33:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:08.347 07:33:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:08.347 07:33:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:08.347 07:33:24 -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.249 07:33:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:10.249 07:33:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:10.249 07:33:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:10.249 07:33:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:10.249 07:33:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:10.249 07:33:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:10.249 07:33:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:10.249 07:33:26 -- nvmf/common.sh@294 -- # net_devs=() 00:16:10.249 07:33:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:10.249 07:33:26 -- nvmf/common.sh@295 -- # e810=() 00:16:10.249 07:33:26 -- nvmf/common.sh@295 -- # local -ga e810 00:16:10.249 07:33:26 -- nvmf/common.sh@296 -- # x722=() 00:16:10.249 07:33:26 -- nvmf/common.sh@296 -- # local -ga x722 00:16:10.249 07:33:26 -- nvmf/common.sh@297 -- # mlx=() 00:16:10.249 07:33:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:10.249 07:33:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.249 07:33:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:10.249 07:33:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:10.249 07:33:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:10.249 07:33:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:10.249 07:33:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:10.249 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:10.249 07:33:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:10.249 07:33:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:10.249 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:10.249 07:33:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
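(Annotation: the nvmf/common.sh trace above walks a table of known NIC device IDs — Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox parts — and maps each matching PCI function to its kernel net device before settling on cvl_0_0/cvl_0_1. A minimal standalone sketch of the same discovery idea, assuming lspci and sysfs are available; this is illustrative, not the suite's actual helper:)

#!/usr/bin/env bash
# Sketch: enumerate Intel E810 functions (vendor 0x8086, device 0x159b)
# and print the net device sysfs exposes for each, mirroring the
# pci_devs -> pci_net_devs mapping traced above.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
done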
00:16:10.249 07:33:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:10.249 07:33:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:10.249 07:33:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.249 07:33:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:10.249 07:33:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.249 07:33:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:10.249 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:10.249 07:33:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.249 07:33:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:10.249 07:33:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.249 07:33:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:10.249 07:33:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.249 07:33:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:10.249 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:10.249 07:33:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.249 07:33:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:10.249 07:33:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:10.249 07:33:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:10.249 07:33:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:10.249 07:33:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.249 07:33:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.249 07:33:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.249 07:33:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:10.249 07:33:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.249 07:33:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.249 07:33:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:10.250 07:33:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.250 07:33:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.250 07:33:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:10.250 07:33:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:10.250 07:33:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.250 07:33:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.250 07:33:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.250 07:33:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.250 07:33:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:10.250 07:33:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.250 07:33:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.250 07:33:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.250 07:33:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:10.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:10.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:16:10.250 00:16:10.250 --- 10.0.0.2 ping statistics --- 00:16:10.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.250 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:16:10.250 07:33:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:16:10.250 00:16:10.250 --- 10.0.0.1 ping statistics --- 00:16:10.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.250 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:10.250 07:33:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.250 07:33:26 -- nvmf/common.sh@410 -- # return 0 00:16:10.250 07:33:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:10.250 07:33:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.250 07:33:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:10.250 07:33:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:10.250 07:33:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.250 07:33:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:10.250 07:33:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:10.508 07:33:26 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:10.508 07:33:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:10.508 07:33:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:10.508 07:33:26 -- common/autotest_common.sh@10 -- # set +x 00:16:10.508 07:33:26 -- nvmf/common.sh@469 -- # nvmfpid=4096827 00:16:10.508 07:33:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:10.508 07:33:26 -- nvmf/common.sh@470 -- # waitforlisten 4096827 00:16:10.508 07:33:26 -- common/autotest_common.sh@819 -- # '[' -z 4096827 ']' 00:16:10.508 07:33:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.508 07:33:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.508 07:33:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.508 07:33:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.508 07:33:26 -- common/autotest_common.sh@10 -- # set +x 00:16:10.508 [2024-07-14 07:33:26.477704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:10.508 [2024-07-14 07:33:26.477782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.508 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.508 [2024-07-14 07:33:26.547309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.508 [2024-07-14 07:33:26.666129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:10.508 [2024-07-14 07:33:26.666315] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.508 [2024-07-14 07:33:26.666334] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
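(Annotation: nvmf_tcp_init above splits the two E810 ports so the target side lives in its own network namespace while the initiator stays in the root namespace, letting a single host drive real NVMe/TCP traffic end to end. Condensed from the ip/iptables commands in the trace; interface names, namespace name, and addresses are the ones the suite assigned:)

# target port is moved into a private namespace and given 10.0.0.2
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator port keeps 10.0.0.1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions, as logged above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target app itself then runs inside the namespace:
# ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF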
00:16:10.508 [2024-07-14 07:33:26.666349] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.508 [2024-07-14 07:33:26.666431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.508 [2024-07-14 07:33:26.666461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.509 [2024-07-14 07:33:26.666515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.509 [2024-07-14 07:33:26.666519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.440 07:33:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.440 07:33:27 -- common/autotest_common.sh@852 -- # return 0 00:16:11.440 07:33:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:11.440 07:33:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:11.440 07:33:27 -- common/autotest_common.sh@10 -- # set +x 00:16:11.440 07:33:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.440 07:33:27 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:11.695 [2024-07-14 07:33:27.677123] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.695 07:33:27 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.953 07:33:27 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:11.953 07:33:27 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.210 07:33:28 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:12.210 07:33:28 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.501 07:33:28 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:12.501 07:33:28 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:12.759 07:33:28 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:12.759 07:33:28 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:13.017 07:33:28 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.275 07:33:29 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:13.275 07:33:29 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.275 07:33:29 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:13.532 07:33:29 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.789 07:33:29 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:13.789 07:33:29 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:13.789 07:33:29 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:14.047 07:33:30 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:14.047 07:33:30 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:14.304 07:33:30 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:14.304 07:33:30 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.561 07:33:30 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.818 [2024-07-14 07:33:30.868319] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.818 07:33:30 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:15.075 07:33:31 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:15.332 07:33:31 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.898 07:33:32 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:15.898 07:33:32 -- common/autotest_common.sh@1177 -- # local i=0 00:16:15.898 07:33:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.898 07:33:32 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:15.898 07:33:32 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:15.898 07:33:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:18.426 07:33:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:18.426 07:33:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:18.426 07:33:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.426 07:33:34 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:18.426 07:33:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.426 07:33:34 -- common/autotest_common.sh@1187 -- # return 0 00:16:18.426 07:33:34 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:18.426 [global] 00:16:18.426 thread=1 00:16:18.426 invalidate=1 00:16:18.426 rw=write 00:16:18.426 time_based=1 00:16:18.426 runtime=1 00:16:18.426 ioengine=libaio 00:16:18.426 direct=1 00:16:18.426 bs=4096 00:16:18.426 iodepth=1 00:16:18.426 norandommap=0 00:16:18.426 numjobs=1 00:16:18.426 00:16:18.426 verify_dump=1 00:16:18.426 verify_backlog=512 00:16:18.426 verify_state_save=0 00:16:18.426 do_verify=1 00:16:18.426 verify=crc32c-intel 00:16:18.426 [job0] 00:16:18.426 filename=/dev/nvme0n1 00:16:18.426 [job1] 00:16:18.426 filename=/dev/nvme0n2 00:16:18.426 [job2] 00:16:18.426 filename=/dev/nvme0n3 00:16:18.426 [job3] 00:16:18.426 filename=/dev/nvme0n4 00:16:18.426 Could not set queue depth (nvme0n1) 00:16:18.426 Could not set queue depth (nvme0n2) 00:16:18.426 Could not set queue depth (nvme0n3) 00:16:18.426 Could not set queue depth (nvme0n4) 00:16:18.426 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:18.426 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:18.426 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
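(Annotation: the fio.sh RPC sequence above is what produces the four namespaces fio later sees as nvme0n1..nvme0n4 — two plain malloc bdevs, one raid0 stripe, one concat. Condensed into a single runnable sequence, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH; all values are taken from the trace:)

rpc.py nvmf_create_transport -t tcp -o -u 8192
for _ in 1 2 3 4 5 6 7; do
    rpc.py bdev_malloc_create 64 512                       # creates Malloc0..Malloc6
done
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do                # the four namespaces
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420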
00:16:18.426 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:18.426 fio-3.35 00:16:18.426 Starting 4 threads 00:16:19.360 00:16:19.360 job0: (groupid=0, jobs=1): err= 0: pid=4097949: Sun Jul 14 07:33:35 2024 00:16:19.360 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:19.360 slat (nsec): min=5502, max=49435, avg=10300.31, stdev=5575.53 00:16:19.360 clat (usec): min=309, max=721, avg=353.81, stdev=24.26 00:16:19.360 lat (usec): min=315, max=727, avg=364.11, stdev=27.63 00:16:19.360 clat percentiles (usec): 00:16:19.360 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 334], 00:16:19.360 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:16:19.360 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 396], 00:16:19.360 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 461], 99.95th=[ 725], 00:16:19.360 | 99.99th=[ 725] 00:16:19.360 write: IOPS=1702, BW=6809KiB/s (6973kB/s)(6816KiB/1001msec); 0 zone resets 00:16:19.360 slat (nsec): min=6771, max=47033, avg=12473.64, stdev=7200.31 00:16:19.360 clat (usec): min=197, max=415, avg=239.99, stdev=34.07 00:16:19.360 lat (usec): min=204, max=462, avg=252.46, stdev=39.07 00:16:19.360 clat percentiles (usec): 00:16:19.360 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:16:19.360 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 237], 00:16:19.360 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:16:19.360 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 416], 00:16:19.360 | 99.99th=[ 416] 00:16:19.360 bw ( KiB/s): min= 8192, max= 8192, per=65.49%, avg=8192.00, stdev= 0.00, samples=1 00:16:19.360 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:19.360 lat (usec) : 250=35.46%, 500=64.51%, 750=0.03% 00:16:19.360 cpu : usr=2.50%, sys=5.40%, ctx=3240, majf=0, minf=1 00:16:19.360 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.360 issued rwts: total=1536,1704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.360 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.360 job1: (groupid=0, jobs=1): err= 0: pid=4097951: Sun Jul 14 07:33:35 2024 00:16:19.360 read: IOPS=20, BW=81.1KiB/s (83.0kB/s)(84.0KiB/1036msec) 00:16:19.360 slat (nsec): min=12408, max=36382, avg=18752.67, stdev=7437.09 00:16:19.360 clat (usec): min=40889, max=41022, avg=40971.65, stdev=29.38 00:16:19.360 lat (usec): min=40909, max=41038, avg=40990.40, stdev=26.62 00:16:19.360 clat percentiles (usec): 00:16:19.360 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:19.360 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:19.361 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:19.361 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:19.361 | 99.99th=[41157] 00:16:19.361 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:16:19.361 slat (usec): min=8, max=21436, avg=62.17, stdev=946.49 00:16:19.361 clat (usec): min=211, max=484, avg=275.31, stdev=27.72 00:16:19.361 lat (usec): min=221, max=21920, avg=337.47, stdev=956.22 00:16:19.361 clat percentiles (usec): 00:16:19.361 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 251], 00:16:19.361 | 30.00th=[ 262], 40.00th=[ 273], 
50.00th=[ 281], 60.00th=[ 281], 00:16:19.361 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 314], 00:16:19.361 | 99.00th=[ 347], 99.50th=[ 416], 99.90th=[ 486], 99.95th=[ 486], 00:16:19.361 | 99.99th=[ 486] 00:16:19.361 bw ( KiB/s): min= 4096, max= 4096, per=32.74%, avg=4096.00, stdev= 0.00, samples=1 00:16:19.361 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:19.361 lat (usec) : 250=18.76%, 500=77.30% 00:16:19.361 lat (msec) : 50=3.94% 00:16:19.361 cpu : usr=0.58%, sys=1.35%, ctx=535, majf=0, minf=2 00:16:19.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.361 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.361 job2: (groupid=0, jobs=1): err= 0: pid=4097952: Sun Jul 14 07:33:35 2024 00:16:19.361 read: IOPS=19, BW=79.0KiB/s (80.9kB/s)(80.0KiB/1013msec) 00:16:19.361 slat (nsec): min=12764, max=34334, avg=16738.40, stdev=6458.52 00:16:19.361 clat (usec): min=40934, max=41466, avg=41001.00, stdev=111.47 00:16:19.361 lat (usec): min=40953, max=41486, avg=41017.74, stdev=111.95 00:16:19.361 clat percentiles (usec): 00:16:19.361 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:19.361 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:19.361 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:19.361 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:19.361 | 99.99th=[41681] 00:16:19.361 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:16:19.361 slat (usec): min=7, max=21434, avg=64.35, stdev=946.35 00:16:19.361 clat (usec): min=241, max=490, avg=306.99, stdev=54.09 00:16:19.361 lat (usec): min=257, max=21890, avg=371.34, stdev=954.52 00:16:19.361 clat percentiles (usec): 00:16:19.361 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 269], 00:16:19.361 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 297], 00:16:19.361 | 70.00th=[ 314], 80.00th=[ 359], 90.00th=[ 404], 95.00th=[ 420], 00:16:19.361 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 490], 99.95th=[ 490], 00:16:19.361 | 99.99th=[ 490] 00:16:19.361 bw ( KiB/s): min= 4096, max= 4096, per=32.74%, avg=4096.00, stdev= 0.00, samples=1 00:16:19.361 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:19.361 lat (usec) : 250=2.07%, 500=94.17% 00:16:19.361 lat (msec) : 50=3.76% 00:16:19.361 cpu : usr=0.59%, sys=0.99%, ctx=534, majf=0, minf=1 00:16:19.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.361 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.361 job3: (groupid=0, jobs=1): err= 0: pid=4097953: Sun Jul 14 07:33:35 2024 00:16:19.361 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:16:19.361 slat (nsec): min=7751, max=33515, avg=15576.62, stdev=6270.02 00:16:19.361 clat (usec): min=475, max=41512, avg=39000.08, stdev=8839.28 00:16:19.361 lat (usec): min=489, max=41520, avg=39015.66, stdev=8839.27 00:16:19.361 clat percentiles (usec): 00:16:19.361 
| 1.00th=[ 478], 5.00th=[39060], 10.00th=[40633], 20.00th=[41157], 00:16:19.361 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:19.361 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:19.361 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:19.361 | 99.99th=[41681] 00:16:19.361 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:16:19.361 slat (nsec): min=6823, max=71540, avg=24133.64, stdev=13246.97 00:16:19.361 clat (usec): min=214, max=600, avg=325.82, stdev=79.38 00:16:19.361 lat (usec): min=230, max=631, avg=349.95, stdev=84.12 00:16:19.361 clat percentiles (usec): 00:16:19.361 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 249], 00:16:19.361 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 306], 60.00th=[ 338], 00:16:19.361 | 70.00th=[ 375], 80.00th=[ 412], 90.00th=[ 437], 95.00th=[ 461], 00:16:19.361 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 603], 99.95th=[ 603], 00:16:19.361 | 99.99th=[ 603] 00:16:19.361 bw ( KiB/s): min= 4096, max= 4096, per=32.74%, avg=4096.00, stdev= 0.00, samples=1 00:16:19.361 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:19.361 lat (usec) : 250=19.70%, 500=74.48%, 750=2.06% 00:16:19.361 lat (msec) : 50=3.75% 00:16:19.361 cpu : usr=0.80%, sys=1.00%, ctx=533, majf=0, minf=1 00:16:19.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.361 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.361 00:16:19.361 Run status group 0 (all jobs): 00:16:19.361 READ: bw=6170KiB/s (6318kB/s), 79.0KiB/s-6138KiB/s (80.9kB/s-6285kB/s), io=6392KiB (6545kB), run=1001-1036msec 00:16:19.361 WRITE: bw=12.2MiB/s (12.8MB/s), 1977KiB/s-6809KiB/s (2024kB/s-6973kB/s), io=12.7MiB (13.3MB), run=1001-1036msec 00:16:19.361 00:16:19.361 Disk stats (read/write): 00:16:19.361 nvme0n1: ios=1314/1536, merge=0/0, ticks=475/343, in_queue=818, util=87.58% 00:16:19.361 nvme0n2: ios=41/512, merge=0/0, ticks=1641/135, in_queue=1776, util=98.17% 00:16:19.361 nvme0n3: ios=44/512, merge=0/0, ticks=1563/149, in_queue=1712, util=97.80% 00:16:19.361 nvme0n4: ios=72/512, merge=0/0, ticks=706/159, in_queue=865, util=91.55% 00:16:19.361 07:33:35 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:19.621 [global] 00:16:19.621 thread=1 00:16:19.621 invalidate=1 00:16:19.621 rw=randwrite 00:16:19.621 time_based=1 00:16:19.621 runtime=1 00:16:19.621 ioengine=libaio 00:16:19.621 direct=1 00:16:19.621 bs=4096 00:16:19.621 iodepth=1 00:16:19.621 norandommap=0 00:16:19.621 numjobs=1 00:16:19.621 00:16:19.621 verify_dump=1 00:16:19.621 verify_backlog=512 00:16:19.621 verify_state_save=0 00:16:19.621 do_verify=1 00:16:19.621 verify=crc32c-intel 00:16:19.621 [job0] 00:16:19.621 filename=/dev/nvme0n1 00:16:19.621 [job1] 00:16:19.621 filename=/dev/nvme0n2 00:16:19.621 [job2] 00:16:19.621 filename=/dev/nvme0n3 00:16:19.621 [job3] 00:16:19.621 filename=/dev/nvme0n4 00:16:19.621 Could not set queue depth (nvme0n1) 00:16:19.621 Could not set queue depth (nvme0n2) 00:16:19.621 Could not set queue depth (nvme0n3) 00:16:19.621 Could not set queue depth (nvme0n4) 00:16:19.621 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.621 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.621 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.621 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.621 fio-3.35 00:16:19.621 Starting 4 threads 00:16:20.995 00:16:20.995 job0: (groupid=0, jobs=1): err= 0: pid=4098186: Sun Jul 14 07:33:36 2024 00:16:20.995 read: IOPS=512, BW=2051KiB/s (2100kB/s)(2100KiB/1024msec) 00:16:20.996 slat (nsec): min=7391, max=39690, avg=13904.81, stdev=5647.79 00:16:20.996 clat (usec): min=327, max=41025, avg=1380.99, stdev=6316.00 00:16:20.996 lat (usec): min=335, max=41045, avg=1394.89, stdev=6317.26 00:16:20.996 clat percentiles (usec): 00:16:20.996 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:16:20.996 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 375], 00:16:20.996 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 400], 95.00th=[ 416], 00:16:20.996 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:20.996 | 99.99th=[41157] 00:16:20.996 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:16:20.996 slat (nsec): min=7476, max=55914, avg=15508.29, stdev=10088.53 00:16:20.996 clat (usec): min=202, max=866, avg=261.95, stdev=50.89 00:16:20.996 lat (usec): min=210, max=884, avg=277.46, stdev=56.74 00:16:20.996 clat percentiles (usec): 00:16:20.996 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 223], 00:16:20.996 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 265], 00:16:20.996 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 363], 00:16:20.996 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 486], 99.95th=[ 865], 00:16:20.996 | 99.99th=[ 865] 00:16:20.996 bw ( KiB/s): min= 8192, max= 8192, per=68.27%, avg=8192.00, stdev= 0.00, samples=1 00:16:20.996 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:20.996 lat (usec) : 250=35.44%, 500=63.20%, 750=0.32%, 1000=0.19% 00:16:20.996 lat (msec) : 50=0.84% 00:16:20.996 cpu : usr=2.15%, sys=2.54%, ctx=1550, majf=0, minf=1 00:16:20.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.996 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.996 job1: (groupid=0, jobs=1): err= 0: pid=4098187: Sun Jul 14 07:33:36 2024 00:16:20.996 read: IOPS=293, BW=1174KiB/s (1202kB/s)(1176KiB/1002msec) 00:16:20.996 slat (nsec): min=6055, max=35080, avg=11484.00, stdev=4875.52 00:16:20.996 clat (usec): min=327, max=41117, avg=2743.81, stdev=9488.24 00:16:20.996 lat (usec): min=337, max=41131, avg=2755.30, stdev=9490.04 00:16:20.996 clat percentiles (usec): 00:16:20.996 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:16:20.996 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 379], 00:16:20.996 | 70.00th=[ 388], 80.00th=[ 469], 90.00th=[ 529], 95.00th=[40633], 00:16:20.996 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:20.996 | 99.99th=[41157] 00:16:20.996 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 
00:16:20.996 slat (nsec): min=8189, max=44124, avg=19489.47, stdev=8530.39 00:16:20.996 clat (usec): min=232, max=506, avg=347.20, stdev=54.06 00:16:20.996 lat (usec): min=248, max=524, avg=366.69, stdev=53.19 00:16:20.996 clat percentiles (usec): 00:16:20.996 | 1.00th=[ 239], 5.00th=[ 251], 10.00th=[ 269], 20.00th=[ 306], 00:16:20.996 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 367], 00:16:20.996 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 429], 00:16:20.996 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 506], 99.95th=[ 506], 00:16:20.996 | 99.99th=[ 506] 00:16:20.996 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:16:20.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:20.996 lat (usec) : 250=2.85%, 500=91.19%, 750=3.60%, 1000=0.12% 00:16:20.996 lat (msec) : 4=0.12%, 50=2.11% 00:16:20.996 cpu : usr=0.70%, sys=1.50%, ctx=807, majf=0, minf=1 00:16:20.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.996 issued rwts: total=294,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.996 job2: (groupid=0, jobs=1): err= 0: pid=4098188: Sun Jul 14 07:33:36 2024 00:16:20.996 read: IOPS=382, BW=1531KiB/s (1567kB/s)(1552KiB/1014msec) 00:16:20.996 slat (nsec): min=7262, max=57808, avg=15769.04, stdev=5946.51 00:16:20.996 clat (usec): min=331, max=41236, avg=2051.36, stdev=8083.57 00:16:20.996 lat (usec): min=338, max=41255, avg=2067.13, stdev=8084.02 00:16:20.996 clat percentiles (usec): 00:16:20.996 | 1.00th=[ 338], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 359], 00:16:20.996 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:16:20.996 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 545], 00:16:20.996 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:20.996 | 99.99th=[41157] 00:16:20.996 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:16:20.996 slat (nsec): min=9442, max=75681, avg=27663.55, stdev=13865.01 00:16:20.996 clat (usec): min=242, max=559, avg=373.72, stdev=63.40 00:16:20.996 lat (usec): min=257, max=599, avg=401.39, stdev=68.16 00:16:20.996 clat percentiles (usec): 00:16:20.996 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 326], 00:16:20.996 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 388], 00:16:20.996 | 70.00th=[ 404], 80.00th=[ 429], 90.00th=[ 457], 95.00th=[ 490], 00:16:20.996 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 562], 99.95th=[ 562], 00:16:20.996 | 99.99th=[ 562] 00:16:20.996 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:16:20.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:20.996 lat (usec) : 250=0.78%, 500=94.56%, 750=2.89% 00:16:20.996 lat (msec) : 50=1.78% 00:16:20.996 cpu : usr=2.07%, sys=1.88%, ctx=901, majf=0, minf=2 00:16:20.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.996 issued rwts: total=388,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.996 job3: 
(groupid=0, jobs=1): err= 0: pid=4098189: Sun Jul 14 07:33:36 2024 00:16:20.996 read: IOPS=798, BW=3193KiB/s (3269kB/s)(3196KiB/1001msec) 00:16:20.996 slat (nsec): min=5747, max=56190, avg=13255.70, stdev=10018.91 00:16:20.996 clat (usec): min=365, max=41826, avg=862.69, stdev=3801.17 00:16:20.996 lat (usec): min=387, max=41848, avg=875.94, stdev=3801.94 00:16:20.996 clat percentiles (usec): 00:16:20.996 | 1.00th=[ 383], 5.00th=[ 404], 10.00th=[ 441], 20.00th=[ 457], 00:16:20.996 | 30.00th=[ 465], 40.00th=[ 490], 50.00th=[ 510], 60.00th=[ 519], 00:16:20.996 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 562], 00:16:20.996 | 99.00th=[10159], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:16:20.996 | 99.99th=[41681] 00:16:20.996 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:20.996 slat (nsec): min=7383, max=63207, avg=15997.65, stdev=10424.52 00:16:20.996 clat (usec): min=205, max=508, avg=269.81, stdev=54.07 00:16:20.996 lat (usec): min=212, max=515, avg=285.81, stdev=61.51 00:16:20.996 clat percentiles (usec): 00:16:20.996 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:16:20.996 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 253], 60.00th=[ 281], 00:16:20.996 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 355], 95.00th=[ 375], 00:16:20.996 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 433], 99.95th=[ 510], 00:16:20.996 | 99.99th=[ 510] 00:16:20.996 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:16:20.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:20.996 lat (usec) : 250=27.54%, 500=47.94%, 750=24.08% 00:16:20.996 lat (msec) : 20=0.05%, 50=0.38% 00:16:20.996 cpu : usr=2.60%, sys=2.80%, ctx=1824, majf=0, minf=1 00:16:20.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.996 issued rwts: total=799,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.996 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.996 00:16:20.996 Run status group 0 (all jobs): 00:16:20.996 READ: bw=7836KiB/s (8024kB/s), 1174KiB/s-3193KiB/s (1202kB/s-3269kB/s), io=8024KiB (8217kB), run=1001-1024msec 00:16:20.996 WRITE: bw=11.7MiB/s (12.3MB/s), 2020KiB/s-4092KiB/s (2068kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1024msec 00:16:20.996 00:16:20.996 Disk stats (read/write): 00:16:20.996 nvme0n1: ios=565/1024, merge=0/0, ticks=866/262, in_queue=1128, util=91.28% 00:16:20.996 nvme0n2: ios=334/512, merge=0/0, ticks=1031/169, in_queue=1200, util=95.13% 00:16:20.996 nvme0n3: ios=341/512, merge=0/0, ticks=1371/179, in_queue=1550, util=97.92% 00:16:20.996 nvme0n4: ios=638/1024, merge=0/0, ticks=757/258, in_queue=1015, util=100.00% 00:16:20.996 07:33:36 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:20.996 [global] 00:16:20.996 thread=1 00:16:20.996 invalidate=1 00:16:20.996 rw=write 00:16:20.996 time_based=1 00:16:20.996 runtime=1 00:16:20.996 ioengine=libaio 00:16:20.996 direct=1 00:16:20.996 bs=4096 00:16:20.996 iodepth=128 00:16:20.996 norandommap=0 00:16:20.996 numjobs=1 00:16:20.996 00:16:20.996 verify_dump=1 00:16:20.996 verify_backlog=512 00:16:20.996 verify_state_save=0 00:16:20.996 do_verify=1 00:16:20.996 verify=crc32c-intel 00:16:20.996 [job0] 00:16:20.996 filename=/dev/nvme0n1 
00:16:20.996 [job1] 00:16:20.996 filename=/dev/nvme0n2 00:16:20.996 [job2] 00:16:20.996 filename=/dev/nvme0n3 00:16:20.996 [job3] 00:16:20.996 filename=/dev/nvme0n4 00:16:20.996 Could not set queue depth (nvme0n1) 00:16:20.996 Could not set queue depth (nvme0n2) 00:16:20.996 Could not set queue depth (nvme0n3) 00:16:20.996 Could not set queue depth (nvme0n4) 00:16:21.254 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.254 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.254 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.254 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:21.254 fio-3.35 00:16:21.254 Starting 4 threads 00:16:22.629 00:16:22.629 job0: (groupid=0, jobs=1): err= 0: pid=4098417: Sun Jul 14 07:33:38 2024 00:16:22.629 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:16:22.629 slat (usec): min=2, max=45731, avg=165.51, stdev=1457.46 00:16:22.629 clat (usec): min=4723, max=85816, avg=20779.08, stdev=19229.32 00:16:22.629 lat (usec): min=4731, max=85822, avg=20944.59, stdev=19335.91 00:16:22.629 clat percentiles (usec): 00:16:22.629 | 1.00th=[ 5014], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 9765], 00:16:22.629 | 30.00th=[10290], 40.00th=[11338], 50.00th=[12125], 60.00th=[12780], 00:16:22.629 | 70.00th=[15270], 80.00th=[35914], 90.00th=[49021], 95.00th=[72877], 00:16:22.629 | 99.00th=[81265], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:16:22.629 | 99.99th=[85459] 00:16:22.629 write: IOPS=4013, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1010msec); 0 zone resets 00:16:22.629 slat (usec): min=3, max=9186, avg=93.00, stdev=512.87 00:16:22.629 clat (usec): min=2891, max=61980, avg=12768.89, stdev=7179.00 00:16:22.629 lat (usec): min=2900, max=61984, avg=12861.89, stdev=7190.02 00:16:22.629 clat percentiles (usec): 00:16:22.629 | 1.00th=[ 4228], 5.00th=[ 6063], 10.00th=[ 7832], 20.00th=[ 9765], 00:16:22.629 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:16:22.629 | 70.00th=[12649], 80.00th=[14222], 90.00th=[15401], 95.00th=[20055], 00:16:22.629 | 99.00th=[56361], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:16:22.629 | 99.99th=[62129] 00:16:22.629 bw ( KiB/s): min=12288, max=19128, per=22.66%, avg=15708.00, stdev=4836.61, samples=2 00:16:22.629 iops : min= 3072, max= 4782, avg=3927.00, stdev=1209.15, samples=2 00:16:22.629 lat (msec) : 4=0.47%, 10=22.37%, 20=63.24%, 50=8.51%, 100=5.41% 00:16:22.629 cpu : usr=2.97%, sys=6.14%, ctx=391, majf=0, minf=1 00:16:22.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:22.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.629 issued rwts: total=3584,4054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.629 job1: (groupid=0, jobs=1): err= 0: pid=4098418: Sun Jul 14 07:33:38 2024 00:16:22.629 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:16:22.629 slat (usec): min=2, max=9634, avg=90.43, stdev=656.57 00:16:22.629 clat (usec): min=6631, max=22672, avg=12208.87, stdev=2983.36 00:16:22.629 lat (usec): min=6638, max=24321, avg=12299.29, stdev=3022.71 00:16:22.629 clat percentiles (usec): 00:16:22.629 | 1.00th=[ 
7046], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9634], 00:16:22.629 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11863], 60.00th=[12649], 00:16:22.629 | 70.00th=[13566], 80.00th=[14877], 90.00th=[16712], 95.00th=[17957], 00:16:22.629 | 99.00th=[19792], 99.50th=[20055], 99.90th=[21365], 99.95th=[21365], 00:16:22.629 | 99.99th=[22676] 00:16:22.629 write: IOPS=5726, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1008msec); 0 zone resets 00:16:22.629 slat (usec): min=4, max=9732, avg=77.60, stdev=580.90 00:16:22.629 clat (usec): min=1446, max=19335, avg=10199.57, stdev=2951.87 00:16:22.629 lat (usec): min=1462, max=19342, avg=10277.17, stdev=2953.80 00:16:22.629 clat percentiles (usec): 00:16:22.629 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 6783], 20.00th=[ 8029], 00:16:22.629 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10421], 00:16:22.629 | 70.00th=[10945], 80.00th=[12387], 90.00th=[15270], 95.00th=[16057], 00:16:22.629 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17695], 99.95th=[19006], 00:16:22.629 | 99.99th=[19268] 00:16:22.629 bw ( KiB/s): min=20696, max=24568, per=32.64%, avg=22632.00, stdev=2737.92, samples=2 00:16:22.629 iops : min= 5174, max= 6142, avg=5658.00, stdev=684.48, samples=2 00:16:22.629 lat (msec) : 2=0.03%, 4=0.32%, 10=37.24%, 20=62.16%, 50=0.25% 00:16:22.629 cpu : usr=6.45%, sys=8.04%, ctx=295, majf=0, minf=1 00:16:22.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:22.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.629 issued rwts: total=5632,5772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.629 job2: (groupid=0, jobs=1): err= 0: pid=4098419: Sun Jul 14 07:33:38 2024 00:16:22.629 read: IOPS=4910, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1007msec) 00:16:22.629 slat (usec): min=2, max=46324, avg=107.36, stdev=1009.41 00:16:22.629 clat (usec): min=2530, max=63633, avg=13636.58, stdev=7128.49 00:16:22.629 lat (usec): min=2533, max=63651, avg=13743.94, stdev=7178.94 00:16:22.629 clat percentiles (usec): 00:16:22.629 | 1.00th=[ 6325], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10945], 00:16:22.629 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12518], 00:16:22.629 | 70.00th=[13304], 80.00th=[14484], 90.00th=[17433], 95.00th=[19792], 00:16:22.629 | 99.00th=[53216], 99.50th=[53216], 99.90th=[60556], 99.95th=[60556], 00:16:22.629 | 99.99th=[63701] 00:16:22.629 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:16:22.629 slat (usec): min=3, max=17100, avg=86.68, stdev=729.44 00:16:22.629 clat (usec): min=2093, max=29447, avg=11751.42, stdev=3780.64 00:16:22.629 lat (usec): min=2099, max=29452, avg=11838.10, stdev=3789.46 00:16:22.629 clat percentiles (usec): 00:16:22.629 | 1.00th=[ 4080], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 8586], 00:16:22.629 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[11863], 00:16:22.629 | 70.00th=[12649], 80.00th=[13698], 90.00th=[15795], 95.00th=[17695], 00:16:22.629 | 99.00th=[26346], 99.50th=[27657], 99.90th=[28181], 99.95th=[29492], 00:16:22.629 | 99.99th=[29492] 00:16:22.629 bw ( KiB/s): min=20480, max=20480, per=29.54%, avg=20480.00, stdev= 0.00, samples=2 00:16:22.629 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:22.629 lat (msec) : 4=0.61%, 10=18.13%, 20=76.90%, 50=3.10%, 100=1.26% 00:16:22.629 cpu : usr=3.38%, sys=4.67%, ctx=271, majf=0, 
minf=1 00:16:22.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:22.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.629 issued rwts: total=4945,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.629 job3: (groupid=0, jobs=1): err= 0: pid=4098420: Sun Jul 14 07:33:38 2024 00:16:22.629 read: IOPS=2341, BW=9365KiB/s (9590kB/s)(9440KiB/1008msec) 00:16:22.629 slat (usec): min=2, max=27966, avg=217.98, stdev=1477.83 00:16:22.629 clat (usec): min=4178, max=98542, avg=27075.34, stdev=25737.30 00:16:22.629 lat (msec): min=7, max=108, avg=27.29, stdev=25.92 00:16:22.629 clat percentiles (usec): 00:16:22.629 | 1.00th=[ 7242], 5.00th=[11994], 10.00th=[12256], 20.00th=[12518], 00:16:22.629 | 30.00th=[13698], 40.00th=[15401], 50.00th=[16712], 60.00th=[17957], 00:16:22.629 | 70.00th=[19268], 80.00th=[25560], 90.00th=[83362], 95.00th=[94897], 00:16:22.629 | 99.00th=[95945], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:16:22.629 | 99.99th=[98042] 00:16:22.629 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:16:22.629 slat (usec): min=3, max=20177, avg=184.82, stdev=1193.08 00:16:22.629 clat (usec): min=6280, max=96109, avg=24849.98, stdev=21665.40 00:16:22.629 lat (usec): min=6298, max=96127, avg=25034.80, stdev=21789.65 00:16:22.629 clat percentiles (usec): 00:16:22.629 | 1.00th=[ 8029], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[11469], 00:16:22.629 | 30.00th=[12649], 40.00th=[14484], 50.00th=[14877], 60.00th=[16581], 00:16:22.629 | 70.00th=[18744], 80.00th=[33817], 90.00th=[65274], 95.00th=[79168], 00:16:22.629 | 99.00th=[95945], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:16:22.629 | 99.99th=[95945] 00:16:22.629 bw ( KiB/s): min= 8192, max=12312, per=14.79%, avg=10252.00, stdev=2913.28, samples=2 00:16:22.629 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:16:22.629 lat (msec) : 10=4.53%, 20=66.89%, 50=14.17%, 100=14.41% 00:16:22.629 cpu : usr=1.99%, sys=2.78%, ctx=221, majf=0, minf=1 00:16:22.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:22.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.629 issued rwts: total=2360,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.629 00:16:22.629 Run status group 0 (all jobs): 00:16:22.629 READ: bw=63.9MiB/s (67.0MB/s), 9365KiB/s-21.8MiB/s (9590kB/s-22.9MB/s), io=64.5MiB (67.7MB), run=1007-1010msec 00:16:22.629 WRITE: bw=67.7MiB/s (71.0MB/s), 9.92MiB/s-22.4MiB/s (10.4MB/s-23.5MB/s), io=68.4MiB (71.7MB), run=1007-1010msec 00:16:22.629 00:16:22.629 Disk stats (read/write): 00:16:22.629 nvme0n1: ios=2605/3071, merge=0/0, ticks=25058/17282, in_queue=42340, util=97.60% 00:16:22.629 nvme0n2: ios=4658/5030, merge=0/0, ticks=55626/49210, in_queue=104836, util=97.87% 00:16:22.629 nvme0n3: ios=4122/4280, merge=0/0, ticks=56295/49389, in_queue=105684, util=98.12% 00:16:22.629 nvme0n4: ios=2112/2560, merge=0/0, ticks=16939/20968, in_queue=37907, util=88.53% 00:16:22.629 07:33:38 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:22.629 [global] 00:16:22.629 thread=1 
00:16:22.629 invalidate=1 00:16:22.629 rw=randwrite 00:16:22.629 time_based=1 00:16:22.629 runtime=1 00:16:22.629 ioengine=libaio 00:16:22.629 direct=1 00:16:22.629 bs=4096 00:16:22.629 iodepth=128 00:16:22.629 norandommap=0 00:16:22.629 numjobs=1 00:16:22.629 00:16:22.629 verify_dump=1 00:16:22.629 verify_backlog=512 00:16:22.629 verify_state_save=0 00:16:22.629 do_verify=1 00:16:22.629 verify=crc32c-intel 00:16:22.629 [job0] 00:16:22.629 filename=/dev/nvme0n1 00:16:22.629 [job1] 00:16:22.629 filename=/dev/nvme0n2 00:16:22.629 [job2] 00:16:22.629 filename=/dev/nvme0n3 00:16:22.629 [job3] 00:16:22.630 filename=/dev/nvme0n4 00:16:22.630 Could not set queue depth (nvme0n1) 00:16:22.630 Could not set queue depth (nvme0n2) 00:16:22.630 Could not set queue depth (nvme0n3) 00:16:22.630 Could not set queue depth (nvme0n4) 00:16:22.630 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.630 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.630 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.630 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:22.630 fio-3.35 00:16:22.630 Starting 4 threads 00:16:24.003 00:16:24.003 job0: (groupid=0, jobs=1): err= 0: pid=4098776: Sun Jul 14 07:33:39 2024 00:16:24.003 read: IOPS=5437, BW=21.2MiB/s (22.3MB/s)(21.5MiB/1010msec) 00:16:24.003 slat (usec): min=2, max=9859, avg=91.91, stdev=619.08 00:16:24.003 clat (usec): min=5462, max=59542, avg=11825.75, stdev=5620.24 00:16:24.003 lat (usec): min=5476, max=59548, avg=11917.66, stdev=5662.64 00:16:24.003 clat percentiles (usec): 00:16:24.003 | 1.00th=[ 6521], 5.00th=[ 7504], 10.00th=[ 8160], 20.00th=[ 8717], 00:16:24.003 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10683], 60.00th=[11469], 00:16:24.003 | 70.00th=[12256], 80.00th=[13566], 90.00th=[16188], 95.00th=[17957], 00:16:24.003 | 99.00th=[44303], 99.50th=[55837], 99.90th=[57934], 99.95th=[59507], 00:16:24.003 | 99.99th=[59507] 00:16:24.003 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets 00:16:24.003 slat (usec): min=4, max=10754, avg=80.19, stdev=494.39 00:16:24.003 clat (usec): min=1585, max=59530, avg=11197.91, stdev=5146.97 00:16:24.003 lat (usec): min=1616, max=59538, avg=11278.10, stdev=5155.83 00:16:24.003 clat percentiles (usec): 00:16:24.003 | 1.00th=[ 4178], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7635], 00:16:24.003 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10945], 00:16:24.003 | 70.00th=[11469], 80.00th=[13435], 90.00th=[16909], 95.00th=[19792], 00:16:24.003 | 99.00th=[35914], 99.50th=[43254], 99.90th=[51119], 99.95th=[51119], 00:16:24.003 | 99.99th=[59507] 00:16:24.003 bw ( KiB/s): min=20480, max=24576, per=37.40%, avg=22528.00, stdev=2896.31, samples=2 00:16:24.003 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:16:24.003 lat (msec) : 2=0.04%, 4=0.21%, 10=42.14%, 20=55.07%, 50=2.05% 00:16:24.003 lat (msec) : 100=0.49% 00:16:24.003 cpu : usr=7.04%, sys=7.04%, ctx=396, majf=0, minf=1 00:16:24.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:24.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.003 issued rwts: total=5492,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 
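(Annotation, not part of the log: the fio-wrapper call above, "-p nvmf -i 4096 -d 128 -t randwrite -r 1 -v", expands to exactly the job file echoed in the log. A minimal stand-alone reproduction is sketched below; the file name randwrite-verify.fio is invented here, and it assumes the four target namespaces are already connected as /dev/nvme0n1..n4.)

cat > randwrite-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio randwrite-verify.fio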
00:16:24.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.003 job1: (groupid=0, jobs=1): err= 0: pid=4098779: Sun Jul 14 07:33:39 2024 00:16:24.003 read: IOPS=2467, BW=9871KiB/s (10.1MB/s)(9.83MiB/1020msec) 00:16:24.003 slat (usec): min=2, max=11663, avg=179.46, stdev=998.03 00:16:24.003 clat (usec): min=4499, max=46425, avg=23306.89, stdev=6909.98 00:16:24.003 lat (usec): min=4505, max=46431, avg=23486.34, stdev=6944.42 00:16:24.003 clat percentiles (usec): 00:16:24.003 | 1.00th=[ 7373], 5.00th=[10945], 10.00th=[14353], 20.00th=[19530], 00:16:24.003 | 30.00th=[21365], 40.00th=[22152], 50.00th=[23200], 60.00th=[23987], 00:16:24.003 | 70.00th=[24773], 80.00th=[27657], 90.00th=[32637], 95.00th=[37487], 00:16:24.003 | 99.00th=[44827], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:16:24.003 | 99.99th=[46400] 00:16:24.004 write: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(10.0MiB/1020msec); 0 zone resets 00:16:24.004 slat (usec): min=3, max=17578, avg=206.57, stdev=1115.01 00:16:24.004 clat (usec): min=8320, max=60851, avg=27100.93, stdev=7399.22 00:16:24.004 lat (usec): min=9072, max=60885, avg=27307.50, stdev=7426.08 00:16:24.004 clat percentiles (usec): 00:16:24.004 | 1.00th=[14615], 5.00th=[16712], 10.00th=[18744], 20.00th=[21627], 00:16:24.004 | 30.00th=[22938], 40.00th=[24511], 50.00th=[25822], 60.00th=[27395], 00:16:24.004 | 70.00th=[30278], 80.00th=[32637], 90.00th=[36963], 95.00th=[42206], 00:16:24.004 | 99.00th=[47449], 99.50th=[47973], 99.90th=[50070], 99.95th=[51119], 00:16:24.004 | 99.99th=[61080] 00:16:24.004 bw ( KiB/s): min= 9584, max=10896, per=17.00%, avg=10240.00, stdev=927.72, samples=2 00:16:24.004 iops : min= 2396, max= 2724, avg=2560.00, stdev=231.93, samples=2 00:16:24.004 lat (msec) : 10=2.23%, 20=15.70%, 50=82.02%, 100=0.06% 00:16:24.004 cpu : usr=1.96%, sys=3.83%, ctx=265, majf=0, minf=1 00:16:24.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:24.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.004 issued rwts: total=2517,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.004 job2: (groupid=0, jobs=1): err= 0: pid=4098780: Sun Jul 14 07:33:39 2024 00:16:24.004 read: IOPS=2130, BW=8523KiB/s (8727kB/s)(8608KiB/1010msec) 00:16:24.004 slat (usec): min=2, max=24886, avg=183.73, stdev=1209.19 00:16:24.004 clat (usec): min=6007, max=65499, avg=23007.11, stdev=10373.47 00:16:24.004 lat (usec): min=6011, max=65506, avg=23190.84, stdev=10462.93 00:16:24.004 clat percentiles (usec): 00:16:24.004 | 1.00th=[ 7963], 5.00th=[11600], 10.00th=[11863], 20.00th=[13173], 00:16:24.004 | 30.00th=[15926], 40.00th=[18744], 50.00th=[21627], 60.00th=[23462], 00:16:24.004 | 70.00th=[26608], 80.00th=[31589], 90.00th=[37487], 95.00th=[43779], 00:16:24.004 | 99.00th=[59507], 99.50th=[61080], 99.90th=[65274], 99.95th=[65274], 00:16:24.004 | 99.99th=[65274] 00:16:24.004 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:16:24.004 slat (usec): min=3, max=17301, avg=212.53, stdev=878.29 00:16:24.004 clat (usec): min=6563, max=82601, avg=30569.93, stdev=19472.63 00:16:24.004 lat (usec): min=6568, max=82609, avg=30782.45, stdev=19591.87 00:16:24.004 clat percentiles (usec): 00:16:24.004 | 1.00th=[ 8586], 5.00th=[11076], 10.00th=[14353], 20.00th=[19268], 00:16:24.004 | 30.00th=[20055], 40.00th=[21103], 50.00th=[22152], 
60.00th=[23725], 00:16:24.004 | 70.00th=[30540], 80.00th=[38536], 90.00th=[71828], 95.00th=[76022], 00:16:24.004 | 99.00th=[80217], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:16:24.004 | 99.99th=[82314] 00:16:24.004 bw ( KiB/s): min= 8008, max=12288, per=16.85%, avg=10148.00, stdev=3026.42, samples=2 00:16:24.004 iops : min= 2002, max= 3072, avg=2537.00, stdev=756.60, samples=2 00:16:24.004 lat (msec) : 10=1.87%, 20=34.66%, 50=54.14%, 100=9.34% 00:16:24.004 cpu : usr=2.08%, sys=2.38%, ctx=389, majf=0, minf=1 00:16:24.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:24.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.004 issued rwts: total=2152,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.004 job3: (groupid=0, jobs=1): err= 0: pid=4098781: Sun Jul 14 07:33:39 2024 00:16:24.004 read: IOPS=4473, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1003msec) 00:16:24.004 slat (usec): min=2, max=11741, avg=115.21, stdev=738.67 00:16:24.004 clat (usec): min=1124, max=39840, avg=15025.64, stdev=5326.71 00:16:24.004 lat (usec): min=2632, max=39844, avg=15140.85, stdev=5363.62 00:16:24.004 clat percentiles (usec): 00:16:24.004 | 1.00th=[ 4686], 5.00th=[ 6521], 10.00th=[ 9765], 20.00th=[11207], 00:16:24.004 | 30.00th=[11994], 40.00th=[13304], 50.00th=[14091], 60.00th=[14746], 00:16:24.004 | 70.00th=[15926], 80.00th=[19006], 90.00th=[22938], 95.00th=[24773], 00:16:24.004 | 99.00th=[30278], 99.50th=[32375], 99.90th=[39584], 99.95th=[39584], 00:16:24.004 | 99.99th=[39584] 00:16:24.004 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:16:24.004 slat (usec): min=3, max=10041, avg=90.43, stdev=617.82 00:16:24.004 clat (usec): min=906, max=32661, avg=12983.27, stdev=3967.27 00:16:24.004 lat (usec): min=928, max=32673, avg=13073.70, stdev=3979.88 00:16:24.004 clat percentiles (usec): 00:16:24.004 | 1.00th=[ 3949], 5.00th=[ 7242], 10.00th=[ 8225], 20.00th=[10290], 00:16:24.004 | 30.00th=[11076], 40.00th=[12256], 50.00th=[12911], 60.00th=[13566], 00:16:24.004 | 70.00th=[14353], 80.00th=[15401], 90.00th=[16581], 95.00th=[19006], 00:16:24.004 | 99.00th=[27395], 99.50th=[27919], 99.90th=[30278], 99.95th=[30278], 00:16:24.004 | 99.99th=[32637] 00:16:24.004 bw ( KiB/s): min=16384, max=20480, per=30.60%, avg=18432.00, stdev=2896.31, samples=2 00:16:24.004 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:16:24.004 lat (usec) : 1000=0.02% 00:16:24.004 lat (msec) : 2=0.01%, 4=0.65%, 10=13.77%, 20=76.15%, 50=9.40% 00:16:24.004 cpu : usr=2.79%, sys=4.99%, ctx=384, majf=0, minf=1 00:16:24.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:24.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.004 issued rwts: total=4487,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.004 00:16:24.004 Run status group 0 (all jobs): 00:16:24.004 READ: bw=56.1MiB/s (58.8MB/s), 8523KiB/s-21.2MiB/s (8727kB/s-22.3MB/s), io=57.2MiB (60.0MB), run=1003-1020msec 00:16:24.004 WRITE: bw=58.8MiB/s (61.7MB/s), 9.80MiB/s-21.8MiB/s (10.3MB/s-22.8MB/s), io=60.0MiB (62.9MB), run=1003-1020msec 00:16:24.004 00:16:24.004 Disk stats (read/write): 00:16:24.004 nvme0n1: 
ios=4609/4663, merge=0/0, ticks=53208/51844, in_queue=105052, util=97.49% 00:16:24.004 nvme0n2: ios=2098/2111, merge=0/0, ticks=14765/16166, in_queue=30931, util=98.07% 00:16:24.004 nvme0n3: ios=2105/2271, merge=0/0, ticks=30034/38188, in_queue=68222, util=97.60% 00:16:24.004 nvme0n4: ios=3701/4096, merge=0/0, ticks=38907/35655, in_queue=74562, util=98.11% 00:16:24.004 07:33:39 -- target/fio.sh@55 -- # sync 00:16:24.004 07:33:39 -- target/fio.sh@59 -- # fio_pid=4098921 00:16:24.004 07:33:39 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:24.004 07:33:39 -- target/fio.sh@61 -- # sleep 3 00:16:24.004 [global] 00:16:24.004 thread=1 00:16:24.004 invalidate=1 00:16:24.004 rw=read 00:16:24.004 time_based=1 00:16:24.004 runtime=10 00:16:24.004 ioengine=libaio 00:16:24.004 direct=1 00:16:24.004 bs=4096 00:16:24.004 iodepth=1 00:16:24.004 norandommap=1 00:16:24.004 numjobs=1 00:16:24.004 00:16:24.004 [job0] 00:16:24.004 filename=/dev/nvme0n1 00:16:24.004 [job1] 00:16:24.004 filename=/dev/nvme0n2 00:16:24.004 [job2] 00:16:24.004 filename=/dev/nvme0n3 00:16:24.004 [job3] 00:16:24.004 filename=/dev/nvme0n4 00:16:24.004 Could not set queue depth (nvme0n1) 00:16:24.004 Could not set queue depth (nvme0n2) 00:16:24.004 Could not set queue depth (nvme0n3) 00:16:24.004 Could not set queue depth (nvme0n4) 00:16:24.004 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.004 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.004 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.004 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.004 fio-3.35 00:16:24.004 Starting 4 threads 00:16:27.284 07:33:42 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:27.284 07:33:43 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:27.284 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=7254016, buflen=4096 00:16:27.284 fio: pid=4099016, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.542 07:33:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:27.542 07:33:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:27.542 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26304512, buflen=4096 00:16:27.542 fio: pid=4099015, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.800 07:33:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:27.800 07:33:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:27.800 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=352256, buflen=4096 00:16:27.800 fio: pid=4099013, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:27.800 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=25300992, buflen=4096 00:16:27.800 fio: pid=4099014, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:28.058 07:33:43 -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.058 07:33:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:28.058 00:16:28.058 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4099013: Sun Jul 14 07:33:44 2024 00:16:28.058 read: IOPS=25, BW=99.5KiB/s (102kB/s)(344KiB/3457msec) 00:16:28.058 slat (usec): min=7, max=26757, avg=328.00, stdev=2866.47 00:16:28.059 clat (usec): min=454, max=41735, avg=39591.66, stdev=7470.36 00:16:28.059 lat (usec): min=480, max=67948, avg=39923.07, stdev=8069.00 00:16:28.059 clat percentiles (usec): 00:16:28.059 | 1.00th=[ 453], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:28.059 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:28.059 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:28.059 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:28.059 | 99.99th=[41681] 00:16:28.059 bw ( KiB/s): min= 96, max= 120, per=0.65%, avg=101.33, stdev= 9.69, samples=6 00:16:28.059 iops : min= 24, max= 30, avg=25.33, stdev= 2.42, samples=6 00:16:28.059 lat (usec) : 500=1.15%, 750=2.30% 00:16:28.059 lat (msec) : 50=95.40% 00:16:28.059 cpu : usr=0.12%, sys=0.00%, ctx=89, majf=0, minf=1 00:16:28.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.059 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.059 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.059 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4099014: Sun Jul 14 07:33:44 2024 00:16:28.059 read: IOPS=1672, BW=6690KiB/s (6851kB/s)(24.1MiB/3693msec) 00:16:28.059 slat (usec): min=4, max=8549, avg=22.92, stdev=147.23 00:16:28.059 clat (usec): min=308, max=41193, avg=568.47, stdev=1809.39 00:16:28.059 lat (usec): min=313, max=42987, avg=591.39, stdev=1821.48 00:16:28.059 clat percentiles (usec): 00:16:28.059 | 1.00th=[ 326], 5.00th=[ 375], 10.00th=[ 424], 20.00th=[ 457], 00:16:28.059 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 498], 00:16:28.059 | 70.00th=[ 510], 80.00th=[ 523], 90.00th=[ 545], 95.00th=[ 562], 00:16:28.059 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:16:28.059 | 99.99th=[41157] 00:16:28.059 bw ( KiB/s): min= 2541, max= 7872, per=43.17%, avg=6759.57, stdev=1895.62, samples=7 00:16:28.059 iops : min= 635, max= 1968, avg=1689.86, stdev=474.00, samples=7 00:16:28.059 lat (usec) : 500=62.72%, 750=36.89%, 1000=0.10% 00:16:28.059 lat (msec) : 2=0.02%, 4=0.02%, 10=0.03%, 50=0.21% 00:16:28.059 cpu : usr=1.63%, sys=3.63%, ctx=6184, majf=0, minf=1 00:16:28.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.059 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.059 issued rwts: total=6178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.059 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4099015: Sun Jul 14 07:33:44 2024 00:16:28.059 read: IOPS=2019, BW=8075KiB/s 
(8269kB/s)(25.1MiB/3181msec) 00:16:28.059 slat (nsec): min=4569, max=72993, avg=18968.55, stdev=9896.88 00:16:28.059 clat (usec): min=316, max=41218, avg=468.23, stdev=717.08 00:16:28.059 lat (usec): min=324, max=41225, avg=487.20, stdev=717.23 00:16:28.059 clat percentiles (usec): 00:16:28.059 | 1.00th=[ 330], 5.00th=[ 347], 10.00th=[ 367], 20.00th=[ 388], 00:16:28.059 | 30.00th=[ 445], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 482], 00:16:28.059 | 70.00th=[ 490], 80.00th=[ 498], 90.00th=[ 519], 95.00th=[ 529], 00:16:28.059 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 644], 99.95th=[ 676], 00:16:28.059 | 99.99th=[41157] 00:16:28.059 bw ( KiB/s): min= 6488, max=10072, per=51.20%, avg=8016.00, stdev=1163.27, samples=6 00:16:28.059 iops : min= 1622, max= 2518, avg=2004.00, stdev=290.82, samples=6 00:16:28.059 lat (usec) : 500=80.51%, 750=19.45% 00:16:28.059 lat (msec) : 50=0.03% 00:16:28.059 cpu : usr=1.89%, sys=4.21%, ctx=6423, majf=0, minf=1 00:16:28.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.059 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.059 issued rwts: total=6423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.059 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4099016: Sun Jul 14 07:33:44 2024 00:16:28.059 read: IOPS=611, BW=2443KiB/s (2501kB/s)(7084KiB/2900msec) 00:16:28.059 slat (nsec): min=4447, max=72317, avg=17521.45, stdev=9102.97 00:16:28.059 clat (usec): min=325, max=41669, avg=1602.76, stdev=6728.96 00:16:28.059 lat (usec): min=336, max=41701, avg=1620.29, stdev=6729.47 00:16:28.059 clat percentiles (usec): 00:16:28.059 | 1.00th=[ 338], 5.00th=[ 359], 10.00th=[ 379], 20.00th=[ 400], 00:16:28.059 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 469], 00:16:28.059 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 562], 00:16:28.059 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:16:28.059 | 99.99th=[41681] 00:16:28.059 bw ( KiB/s): min= 112, max= 5440, per=11.18%, avg=1750.40, stdev=2378.67, samples=5 00:16:28.059 iops : min= 28, max= 1360, avg=437.60, stdev=594.67, samples=5 00:16:28.059 lat (usec) : 500=82.79%, 750=14.28% 00:16:28.059 lat (msec) : 20=0.06%, 50=2.82% 00:16:28.059 cpu : usr=0.62%, sys=1.07%, ctx=1772, majf=0, minf=1 00:16:28.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.059 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.059 issued rwts: total=1772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.059 00:16:28.059 Run status group 0 (all jobs): 00:16:28.059 READ: bw=15.3MiB/s (16.0MB/s), 99.5KiB/s-8075KiB/s (102kB/s-8269kB/s), io=56.5MiB (59.2MB), run=2900-3693msec 00:16:28.059 00:16:28.059 Disk stats (read/write): 00:16:28.059 nvme0n1: ios=111/0, merge=0/0, ticks=3434/0, in_queue=3434, util=98.91% 00:16:28.059 nvme0n2: ios=6014/0, merge=0/0, ticks=4338/0, in_queue=4338, util=99.09% 00:16:28.059 nvme0n3: ios=6267/0, merge=0/0, ticks=2890/0, in_queue=2890, util=96.79% 00:16:28.059 nvme0n4: ios=1713/0, merge=0/0, ticks=2792/0, in_queue=2792, util=96.71% 00:16:28.318 07:33:44 -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.318 07:33:44 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:28.583 07:33:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.583 07:33:44 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:28.583 07:33:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.583 07:33:44 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:28.907 07:33:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:28.907 07:33:44 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:29.166 07:33:45 -- target/fio.sh@69 -- # fio_status=0 00:16:29.166 07:33:45 -- target/fio.sh@70 -- # wait 4098921 00:16:29.166 07:33:45 -- target/fio.sh@70 -- # fio_status=4 00:16:29.166 07:33:45 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.166 07:33:45 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:29.423 07:33:45 -- common/autotest_common.sh@1198 -- # local i=0 00:16:29.423 07:33:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:29.423 07:33:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.423 07:33:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:29.423 07:33:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.423 07:33:45 -- common/autotest_common.sh@1210 -- # return 0 00:16:29.423 07:33:45 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:29.423 07:33:45 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:29.423 nvmf hotplug test: fio failed as expected 00:16:29.423 07:33:45 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.682 07:33:45 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:29.682 07:33:45 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:29.682 07:33:45 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:29.682 07:33:45 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:29.682 07:33:45 -- target/fio.sh@91 -- # nvmftestfini 00:16:29.682 07:33:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:29.682 07:33:45 -- nvmf/common.sh@116 -- # sync 00:16:29.682 07:33:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:29.682 07:33:45 -- nvmf/common.sh@119 -- # set +e 00:16:29.682 07:33:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:29.682 07:33:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:29.682 rmmod nvme_tcp 00:16:29.682 rmmod nvme_fabrics 00:16:29.682 rmmod nvme_keyring 00:16:29.682 07:33:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:29.682 07:33:45 -- nvmf/common.sh@123 -- # set -e 00:16:29.682 07:33:45 -- nvmf/common.sh@124 -- # return 0 00:16:29.682 07:33:45 -- nvmf/common.sh@477 -- # '[' -n 4096827 ']' 00:16:29.682 07:33:45 -- nvmf/common.sh@478 -- # killprocess 4096827 00:16:29.682 07:33:45 -- common/autotest_common.sh@926 -- # '[' -z 4096827 ']' 00:16:29.682 
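(Annotation: the killprocess xtrace around this point condenses to roughly the helper below. This is a sketch reconstructed from the trace lines, not a verbatim copy of autotest_common.sh.)

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 1                 # process must still be running
    if [ "$(uname)" = Linux ]; then
        # never kill a bare "sudo" wrapper; the trace checks the comm name first
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap the child before continuing
}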
07:33:45 -- common/autotest_common.sh@930 -- # kill -0 4096827 00:16:29.682 07:33:45 -- common/autotest_common.sh@931 -- # uname 00:16:29.682 07:33:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:29.682 07:33:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4096827 00:16:29.682 07:33:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:29.682 07:33:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:29.682 07:33:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4096827' 00:16:29.682 killing process with pid 4096827 00:16:29.682 07:33:45 -- common/autotest_common.sh@945 -- # kill 4096827 00:16:29.682 07:33:45 -- common/autotest_common.sh@950 -- # wait 4096827 00:16:29.940 07:33:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:29.940 07:33:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:29.940 07:33:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:29.940 07:33:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.940 07:33:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:29.940 07:33:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.940 07:33:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.940 07:33:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.842 07:33:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:31.842 00:16:31.842 real 0m23.751s 00:16:31.842 user 1m23.809s 00:16:31.842 sys 0m6.121s 00:16:31.842 07:33:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.842 07:33:47 -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 ************************************ 00:16:31.842 END TEST nvmf_fio_target 00:16:31.842 ************************************ 00:16:32.100 07:33:48 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:32.100 07:33:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:32.100 07:33:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:32.100 07:33:48 -- common/autotest_common.sh@10 -- # set +x 00:16:32.100 ************************************ 00:16:32.100 START TEST nvmf_bdevio 00:16:32.100 ************************************ 00:16:32.100 07:33:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:32.100 * Looking for test storage... 
00:16:32.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.100 07:33:48 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.100 07:33:48 -- nvmf/common.sh@7 -- # uname -s 00:16:32.100 07:33:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.100 07:33:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.100 07:33:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.100 07:33:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.100 07:33:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.100 07:33:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.100 07:33:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.100 07:33:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.100 07:33:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.100 07:33:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.100 07:33:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.100 07:33:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.100 07:33:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.100 07:33:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.100 07:33:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.100 07:33:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.100 07:33:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.100 07:33:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.100 07:33:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.100 07:33:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.100 07:33:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.100 07:33:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.100 07:33:48 -- paths/export.sh@5 -- # export PATH 00:16:32.100 07:33:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.100 07:33:48 -- nvmf/common.sh@46 -- # : 0 00:16:32.100 07:33:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:32.100 07:33:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:32.100 07:33:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:32.100 07:33:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.100 07:33:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.100 07:33:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:32.100 07:33:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:32.100 07:33:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:32.100 07:33:48 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.100 07:33:48 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.100 07:33:48 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:32.100 07:33:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:32.100 07:33:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.100 07:33:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:32.100 07:33:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:32.100 07:33:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:32.100 07:33:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.100 07:33:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.100 07:33:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.100 07:33:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:32.100 07:33:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:32.100 07:33:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:32.100 07:33:48 -- common/autotest_common.sh@10 -- # set +x 00:16:33.998 07:33:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:33.998 07:33:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:33.998 07:33:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:33.998 07:33:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:33.998 07:33:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:33.998 07:33:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:33.998 07:33:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:33.998 07:33:49 -- nvmf/common.sh@294 -- # net_devs=() 00:16:33.998 07:33:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:33.998 07:33:49 -- nvmf/common.sh@295 
-- # e810=() 00:16:33.998 07:33:49 -- nvmf/common.sh@295 -- # local -ga e810 00:16:33.998 07:33:49 -- nvmf/common.sh@296 -- # x722=() 00:16:33.998 07:33:49 -- nvmf/common.sh@296 -- # local -ga x722 00:16:33.998 07:33:49 -- nvmf/common.sh@297 -- # mlx=() 00:16:33.998 07:33:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:33.998 07:33:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.999 07:33:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:33.999 07:33:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:33.999 07:33:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:33.999 07:33:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:33.999 07:33:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:33.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:33.999 07:33:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:33.999 07:33:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:33.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:33.999 07:33:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:33.999 07:33:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:33.999 07:33:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.999 07:33:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:33.999 07:33:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.999 07:33:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:33.999 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:33.999 07:33:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.999 07:33:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:33.999 07:33:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.999 07:33:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:33.999 07:33:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.999 07:33:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:33.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:33.999 07:33:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.999 07:33:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:33.999 07:33:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:33.999 07:33:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:33.999 07:33:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:33.999 07:33:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.999 07:33:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.999 07:33:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.999 07:33:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:33.999 07:33:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.999 07:33:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.999 07:33:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:33.999 07:33:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.999 07:33:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.999 07:33:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:33.999 07:33:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:33.999 07:33:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.999 07:33:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.999 07:33:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.999 07:33:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.999 07:33:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:33.999 07:33:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.999 07:33:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.999 07:33:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.999 07:33:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:33.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:16:33.999 00:16:33.999 --- 10.0.0.2 ping statistics --- 00:16:33.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.999 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:33.999 07:33:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:16:33.999 00:16:33.999 --- 10.0.0.1 ping statistics --- 00:16:33.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.999 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:16:33.999 07:33:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.999 07:33:50 -- nvmf/common.sh@410 -- # return 0 00:16:33.999 07:33:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:33.999 07:33:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.999 07:33:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:33.999 07:33:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:33.999 07:33:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.999 07:33:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:33.999 07:33:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:33.999 07:33:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:33.999 07:33:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:33.999 07:33:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:33.999 07:33:50 -- common/autotest_common.sh@10 -- # set +x 00:16:33.999 07:33:50 -- nvmf/common.sh@469 -- # nvmfpid=4101666 00:16:33.999 07:33:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:33.999 07:33:50 -- nvmf/common.sh@470 -- # waitforlisten 4101666 00:16:33.999 07:33:50 -- common/autotest_common.sh@819 -- # '[' -z 4101666 ']' 00:16:33.999 07:33:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.999 07:33:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:33.999 07:33:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.999 07:33:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:33.999 07:33:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.258 [2024-07-14 07:33:50.196736] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:34.258 [2024-07-14 07:33:50.196805] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.258 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.258 [2024-07-14 07:33:50.270039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.258 [2024-07-14 07:33:50.390198] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:34.258 [2024-07-14 07:33:50.390362] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.258 [2024-07-14 07:33:50.390382] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.258 [2024-07-14 07:33:50.390396] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
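(Annotation: the network setup that scrolled past condenses to the commands below, all of which appear verbatim in the trace. One E810 port, cvl_0_0, is moved into a namespace and addressed as the 10.0.0.2 target; the other port, cvl_0_1, stays in the root namespace as the 10.0.0.1 initiator; the target app then runs inside the namespace. $rootdir stands in for the long workspace path shown in the log.)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &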
00:16:34.258 [2024-07-14 07:33:50.390463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:34.258 [2024-07-14 07:33:50.390681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:34.258 [2024-07-14 07:33:50.390742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.258 [2024-07-14 07:33:50.390738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:35.193 07:33:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:35.193 07:33:51 -- common/autotest_common.sh@852 -- # return 0 00:16:35.193 07:33:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:35.193 07:33:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:35.193 07:33:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.193 07:33:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.193 07:33:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.193 07:33:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.193 07:33:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.193 [2024-07-14 07:33:51.258608] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.193 07:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.193 07:33:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.193 07:33:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.193 07:33:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.193 Malloc0 00:16:35.193 07:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.193 07:33:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:35.193 07:33:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.193 07:33:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.193 07:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.193 07:33:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.193 07:33:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.193 07:33:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.193 07:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.193 07:33:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.193 07:33:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.193 07:33:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.193 [2024-07-14 07:33:51.312398] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.193 07:33:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.193 07:33:51 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:35.193 07:33:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:35.193 07:33:51 -- nvmf/common.sh@520 -- # config=() 00:16:35.193 07:33:51 -- nvmf/common.sh@520 -- # local subsystem config 00:16:35.193 07:33:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.193 07:33:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.193 { 00:16:35.193 "params": { 00:16:35.193 "name": "Nvme$subsystem", 00:16:35.193 "trtype": "$TEST_TRANSPORT", 00:16:35.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.193 "adrfam": "ipv4", 00:16:35.193 "trsvcid": 
"$NVMF_PORT", 00:16:35.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.193 "hdgst": ${hdgst:-false}, 00:16:35.193 "ddgst": ${ddgst:-false} 00:16:35.193 }, 00:16:35.193 "method": "bdev_nvme_attach_controller" 00:16:35.193 } 00:16:35.193 EOF 00:16:35.193 )") 00:16:35.193 07:33:51 -- nvmf/common.sh@542 -- # cat 00:16:35.193 07:33:51 -- nvmf/common.sh@544 -- # jq . 00:16:35.193 07:33:51 -- nvmf/common.sh@545 -- # IFS=, 00:16:35.193 07:33:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:35.193 "params": { 00:16:35.193 "name": "Nvme1", 00:16:35.193 "trtype": "tcp", 00:16:35.193 "traddr": "10.0.0.2", 00:16:35.193 "adrfam": "ipv4", 00:16:35.193 "trsvcid": "4420", 00:16:35.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.193 "hdgst": false, 00:16:35.193 "ddgst": false 00:16:35.193 }, 00:16:35.193 "method": "bdev_nvme_attach_controller" 00:16:35.193 }' 00:16:35.193 [2024-07-14 07:33:51.356509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:35.193 [2024-07-14 07:33:51.356576] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101825 ] 00:16:35.450 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.450 [2024-07-14 07:33:51.419048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.450 [2024-07-14 07:33:51.528480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.450 [2024-07-14 07:33:51.528530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.450 [2024-07-14 07:33:51.528533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.708 [2024-07-14 07:33:51.744748] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:35.708 [2024-07-14 07:33:51.744796] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:35.708 I/O targets: 00:16:35.708 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:35.708 00:16:35.708 00:16:35.709 CUnit - A unit testing framework for C - Version 2.1-3 00:16:35.709 http://cunit.sourceforge.net/ 00:16:35.709 00:16:35.709 00:16:35.709 Suite: bdevio tests on: Nvme1n1 00:16:35.709 Test: blockdev write read block ...passed 00:16:35.709 Test: blockdev write zeroes read block ...passed 00:16:35.709 Test: blockdev write zeroes read no split ...passed 00:16:35.966 Test: blockdev write zeroes read split ...passed 00:16:35.966 Test: blockdev write zeroes read split partial ...passed 00:16:35.966 Test: blockdev reset ...[2024-07-14 07:33:51.957781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:35.966 [2024-07-14 07:33:51.957901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b2180 (9): Bad file descriptor 00:16:35.966 [2024-07-14 07:33:51.974585] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:35.966 passed 00:16:35.966 Test: blockdev write read 8 blocks ...passed 00:16:35.966 Test: blockdev write read size > 128k ...passed 00:16:35.966 Test: blockdev write read invalid size ...passed 00:16:35.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:35.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:35.966 Test: blockdev write read max offset ...passed 00:16:35.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.224 Test: blockdev writev readv 8 blocks ...passed 00:16:36.224 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.224 Test: blockdev writev readv block ...passed 00:16:36.224 Test: blockdev writev readv size > 128k ...passed 00:16:36.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.224 Test: blockdev comparev and writev ...[2024-07-14 07:33:52.236149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.224 [2024-07-14 07:33:52.236186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:36.224 [2024-07-14 07:33:52.236210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.224 [2024-07-14 07:33:52.236226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:36.224 [2024-07-14 07:33:52.236629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.224 [2024-07-14 07:33:52.236654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:36.224 [2024-07-14 07:33:52.236675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.224 [2024-07-14 07:33:52.236691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:36.224 [2024-07-14 07:33:52.237098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.225 [2024-07-14 07:33:52.237122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:36.225 [2024-07-14 07:33:52.237143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.225 [2024-07-14 07:33:52.237159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:36.225 [2024-07-14 07:33:52.237551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.225 [2024-07-14 07:33:52.237574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:36.225 [2024-07-14 07:33:52.237595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.225 [2024-07-14 07:33:52.237611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:36.225 passed 00:16:36.225 Test: blockdev nvme passthru rw ...passed 00:16:36.225 Test: blockdev nvme passthru vendor specific ...[2024-07-14 07:33:52.322272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.225 [2024-07-14 07:33:52.322305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:36.225 [2024-07-14 07:33:52.322545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.225 [2024-07-14 07:33:52.322568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:36.225 [2024-07-14 07:33:52.322807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.225 [2024-07-14 07:33:52.322829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:36.225 [2024-07-14 07:33:52.323068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:36.225 [2024-07-14 07:33:52.323091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:36.225 passed 00:16:36.225 Test: blockdev nvme admin passthru ...passed 00:16:36.225 Test: blockdev copy ...passed 00:16:36.225 00:16:36.225 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.225 suites 1 1 n/a 0 0 00:16:36.225 tests 23 23 23 0 0 00:16:36.225 asserts 152 152 152 0 n/a 00:16:36.225 00:16:36.225 Elapsed time = 1.271 seconds 00:16:36.483 07:33:52 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.483 07:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.483 07:33:52 -- common/autotest_common.sh@10 -- # set +x 00:16:36.483 07:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.483 07:33:52 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:36.483 07:33:52 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:36.483 07:33:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:36.483 07:33:52 -- nvmf/common.sh@116 -- # sync 00:16:36.483 07:33:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:36.483 07:33:52 -- nvmf/common.sh@119 -- # set +e 00:16:36.483 07:33:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:36.483 07:33:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:36.483 rmmod nvme_tcp 00:16:36.483 rmmod nvme_fabrics 00:16:36.740 rmmod nvme_keyring 00:16:36.740 07:33:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:36.740 07:33:52 -- nvmf/common.sh@123 -- # set -e 00:16:36.740 07:33:52 -- nvmf/common.sh@124 -- # return 0 00:16:36.740 07:33:52 -- nvmf/common.sh@477 -- # '[' -n 4101666 ']' 00:16:36.740 07:33:52 -- nvmf/common.sh@478 -- # killprocess 4101666 00:16:36.740 07:33:52 -- common/autotest_common.sh@926 -- # '[' -z 4101666 ']' 00:16:36.740 07:33:52 -- common/autotest_common.sh@930 -- # kill -0 4101666 00:16:36.740 07:33:52 -- common/autotest_common.sh@931 -- # uname 00:16:36.740 07:33:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:36.740 07:33:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4101666 00:16:36.740 07:33:52 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:36.740 07:33:52 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:36.740 07:33:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4101666' 00:16:36.740 killing process with pid 4101666 00:16:36.740 07:33:52 -- common/autotest_common.sh@945 -- # kill 4101666 00:16:36.740 07:33:52 -- common/autotest_common.sh@950 -- # wait 4101666 00:16:36.997 07:33:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:36.997 07:33:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:36.997 07:33:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:36.997 07:33:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.997 07:33:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:36.997 07:33:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.997 07:33:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.997 07:33:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.897 07:33:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:38.897 00:16:38.897 real 0m7.024s 00:16:38.897 user 0m13.599s 00:16:38.897 sys 0m2.040s 00:16:38.897 07:33:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.897 07:33:55 -- common/autotest_common.sh@10 -- # set +x 00:16:38.897 ************************************ 00:16:38.897 END TEST nvmf_bdevio 00:16:38.897 ************************************ 00:16:39.155 07:33:55 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:39.155 07:33:55 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:39.155 07:33:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:39.155 07:33:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.155 07:33:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.155 ************************************ 00:16:39.155 START TEST nvmf_bdevio_no_huge 00:16:39.155 ************************************ 00:16:39.155 07:33:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:39.155 * Looking for test storage... 
00:16:39.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.155 07:33:55 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.155 07:33:55 -- nvmf/common.sh@7 -- # uname -s 00:16:39.155 07:33:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.155 07:33:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.155 07:33:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.155 07:33:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.155 07:33:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.155 07:33:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.155 07:33:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.155 07:33:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.155 07:33:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.155 07:33:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.155 07:33:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.155 07:33:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.155 07:33:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.155 07:33:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.155 07:33:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.155 07:33:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.155 07:33:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.155 07:33:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.155 07:33:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.155 07:33:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.155 07:33:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.155 07:33:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.155 07:33:55 -- paths/export.sh@5 -- # export PATH 00:16:39.155 07:33:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.155 07:33:55 -- nvmf/common.sh@46 -- # : 0 00:16:39.155 07:33:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.155 07:33:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.155 07:33:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.155 07:33:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.155 07:33:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.155 07:33:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.155 07:33:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.155 07:33:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.155 07:33:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.155 07:33:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.155 07:33:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:39.155 07:33:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:39.155 07:33:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.155 07:33:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.155 07:33:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.155 07:33:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.155 07:33:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.155 07:33:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.155 07:33:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.155 07:33:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:39.155 07:33:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:39.155 07:33:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:39.155 07:33:55 -- common/autotest_common.sh@10 -- # set +x 00:16:41.055 07:33:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:41.055 07:33:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:41.055 07:33:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:41.055 07:33:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:41.055 07:33:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:41.055 07:33:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:41.055 07:33:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:41.055 07:33:57 -- nvmf/common.sh@294 -- # net_devs=() 00:16:41.055 07:33:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:41.055 07:33:57 -- nvmf/common.sh@295 
-- # e810=() 00:16:41.055 07:33:57 -- nvmf/common.sh@295 -- # local -ga e810 00:16:41.055 07:33:57 -- nvmf/common.sh@296 -- # x722=() 00:16:41.055 07:33:57 -- nvmf/common.sh@296 -- # local -ga x722 00:16:41.055 07:33:57 -- nvmf/common.sh@297 -- # mlx=() 00:16:41.055 07:33:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:41.055 07:33:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.055 07:33:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.055 07:33:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.055 07:33:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.055 07:33:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.055 07:33:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.055 07:33:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.055 07:33:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.055 07:33:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.056 07:33:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.056 07:33:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.056 07:33:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:41.056 07:33:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:41.056 07:33:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:41.056 07:33:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:41.056 07:33:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:41.056 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:41.056 07:33:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:41.056 07:33:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:41.056 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:41.056 07:33:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:41.056 07:33:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:41.056 07:33:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.056 07:33:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:41.056 07:33:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.056 07:33:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:41.056 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:41.056 07:33:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.056 07:33:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:41.056 07:33:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.056 07:33:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:41.056 07:33:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.056 07:33:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:41.056 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:41.056 07:33:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.056 07:33:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:41.056 07:33:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:41.056 07:33:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:41.056 07:33:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.056 07:33:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.056 07:33:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.056 07:33:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:41.056 07:33:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.056 07:33:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.056 07:33:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:41.056 07:33:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.056 07:33:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.056 07:33:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:41.056 07:33:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:41.056 07:33:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.056 07:33:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.056 07:33:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.056 07:33:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.056 07:33:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:41.056 07:33:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.056 07:33:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.056 07:33:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.056 07:33:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:41.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:16:41.056 00:16:41.056 --- 10.0.0.2 ping statistics --- 00:16:41.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.056 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:16:41.056 07:33:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:41.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:41.056 00:16:41.056 --- 10.0.0.1 ping statistics --- 00:16:41.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.056 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:41.056 07:33:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.056 07:33:57 -- nvmf/common.sh@410 -- # return 0 00:16:41.056 07:33:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:41.056 07:33:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.056 07:33:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:41.056 07:33:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.056 07:33:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:41.056 07:33:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:41.314 07:33:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:41.314 07:33:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:41.314 07:33:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:41.314 07:33:57 -- common/autotest_common.sh@10 -- # set +x 00:16:41.314 07:33:57 -- nvmf/common.sh@469 -- # nvmfpid=4103912 00:16:41.314 07:33:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:41.315 07:33:57 -- nvmf/common.sh@470 -- # waitforlisten 4103912 00:16:41.315 07:33:57 -- common/autotest_common.sh@819 -- # '[' -z 4103912 ']' 00:16:41.315 07:33:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.315 07:33:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:41.315 07:33:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.315 07:33:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:41.315 07:33:57 -- common/autotest_common.sh@10 -- # set +x 00:16:41.315 [2024-07-14 07:33:57.287021] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:41.315 [2024-07-14 07:33:57.287100] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:41.315 [2024-07-14 07:33:57.362357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.315 [2024-07-14 07:33:57.478197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:41.315 [2024-07-14 07:33:57.478373] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.315 [2024-07-14 07:33:57.478392] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.315 [2024-07-14 07:33:57.478406] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
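The nvmf_tcp_init sequence traced above is the whole network fixture for these tests: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side at 10.0.0.2, its sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and a ping in each direction proves the path before any NVMe traffic flows. A minimal sketch of the same setup, using the interface and namespace names from this run (root required; the iptables rule only matters if the INPUT chain would otherwise drop port 4420):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator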
00:16:41.315 [2024-07-14 07:33:57.478465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.315 [2024-07-14 07:33:57.478499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:41.315 [2024-07-14 07:33:57.478526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:41.315 [2024-07-14 07:33:57.478528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.248 07:33:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:42.248 07:33:58 -- common/autotest_common.sh@852 -- # return 0 00:16:42.249 07:33:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:42.249 07:33:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:42.249 07:33:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.249 07:33:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.249 07:33:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.249 07:33:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.249 07:33:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.249 [2024-07-14 07:33:58.285743] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.249 07:33:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.249 07:33:58 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:42.249 07:33:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.249 07:33:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.249 Malloc0 00:16:42.249 07:33:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.249 07:33:58 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:42.249 07:33:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.249 07:33:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.249 07:33:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.249 07:33:58 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.249 07:33:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.249 07:33:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.249 07:33:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.249 07:33:58 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.249 07:33:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:42.249 07:33:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.249 [2024-07-14 07:33:58.323838] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.249 07:33:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.249 07:33:58 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:42.249 07:33:58 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:42.249 07:33:58 -- nvmf/common.sh@520 -- # config=() 00:16:42.249 07:33:58 -- nvmf/common.sh@520 -- # local subsystem config 00:16:42.249 07:33:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:42.249 07:33:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:42.249 { 00:16:42.249 "params": { 00:16:42.249 "name": "Nvme$subsystem", 00:16:42.249 "trtype": "$TEST_TRANSPORT", 00:16:42.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.249 "adrfam": "ipv4", 00:16:42.249 
"trsvcid": "$NVMF_PORT", 00:16:42.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.249 "hdgst": ${hdgst:-false}, 00:16:42.249 "ddgst": ${ddgst:-false} 00:16:42.249 }, 00:16:42.249 "method": "bdev_nvme_attach_controller" 00:16:42.249 } 00:16:42.249 EOF 00:16:42.249 )") 00:16:42.249 07:33:58 -- nvmf/common.sh@542 -- # cat 00:16:42.249 07:33:58 -- nvmf/common.sh@544 -- # jq . 00:16:42.249 07:33:58 -- nvmf/common.sh@545 -- # IFS=, 00:16:42.249 07:33:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:42.249 "params": { 00:16:42.249 "name": "Nvme1", 00:16:42.249 "trtype": "tcp", 00:16:42.249 "traddr": "10.0.0.2", 00:16:42.249 "adrfam": "ipv4", 00:16:42.249 "trsvcid": "4420", 00:16:42.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.249 "hdgst": false, 00:16:42.249 "ddgst": false 00:16:42.249 }, 00:16:42.249 "method": "bdev_nvme_attach_controller" 00:16:42.249 }' 00:16:42.249 [2024-07-14 07:33:58.368005] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:42.249 [2024-07-14 07:33:58.368076] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4104073 ] 00:16:42.507 [2024-07-14 07:33:58.431673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:42.507 [2024-07-14 07:33:58.543801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.507 [2024-07-14 07:33:58.543848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.507 [2024-07-14 07:33:58.543851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.764 [2024-07-14 07:33:58.898829] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:42.764 [2024-07-14 07:33:58.898889] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:42.764 I/O targets: 00:16:42.764 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:42.764 00:16:42.764 00:16:42.764 CUnit - A unit testing framework for C - Version 2.1-3 00:16:42.764 http://cunit.sourceforge.net/ 00:16:42.764 00:16:42.764 00:16:42.764 Suite: bdevio tests on: Nvme1n1 00:16:43.022 Test: blockdev write read block ...passed 00:16:43.022 Test: blockdev write zeroes read block ...passed 00:16:43.022 Test: blockdev write zeroes read no split ...passed 00:16:43.022 Test: blockdev write zeroes read split ...passed 00:16:43.022 Test: blockdev write zeroes read split partial ...passed 00:16:43.022 Test: blockdev reset ...[2024-07-14 07:33:59.116360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:43.022 [2024-07-14 07:33:59.116464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d5b00 (9): Bad file descriptor 00:16:43.022 [2024-07-14 07:33:59.128793] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:43.022 passed 00:16:43.022 Test: blockdev write read 8 blocks ...passed 00:16:43.022 Test: blockdev write read size > 128k ...passed 00:16:43.022 Test: blockdev write read invalid size ...passed 00:16:43.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:43.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:43.280 Test: blockdev write read max offset ...passed 00:16:43.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:43.280 Test: blockdev writev readv 8 blocks ...passed 00:16:43.280 Test: blockdev writev readv 30 x 1block ...passed 00:16:43.280 Test: blockdev writev readv block ...passed 00:16:43.280 Test: blockdev writev readv size > 128k ...passed 00:16:43.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:43.280 Test: blockdev comparev and writev ...[2024-07-14 07:33:59.349447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.280 [2024-07-14 07:33:59.349482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.280 [2024-07-14 07:33:59.349505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.280 [2024-07-14 07:33:59.349522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:43.280 [2024-07-14 07:33:59.349915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.280 [2024-07-14 07:33:59.349940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:43.280 [2024-07-14 07:33:59.349961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.280 [2024-07-14 07:33:59.349977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:43.280 [2024-07-14 07:33:59.350341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.280 [2024-07-14 07:33:59.350364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:43.280 [2024-07-14 07:33:59.350385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.280 [2024-07-14 07:33:59.350401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:43.280 [2024-07-14 07:33:59.350771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.280 [2024-07-14 07:33:59.350794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:43.280 [2024-07-14 07:33:59.350815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.280 [2024-07-14 07:33:59.350831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:43.280 passed 00:16:43.280 Test: blockdev nvme passthru rw ...passed 00:16:43.281 Test: blockdev nvme passthru vendor specific ...[2024-07-14 07:33:59.434220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.281 [2024-07-14 07:33:59.434246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:43.281 [2024-07-14 07:33:59.434482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.281 [2024-07-14 07:33:59.434505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:43.281 [2024-07-14 07:33:59.434742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.281 [2024-07-14 07:33:59.434765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:43.281 [2024-07-14 07:33:59.434986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.281 [2024-07-14 07:33:59.435009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:43.281 passed 00:16:43.281 Test: blockdev nvme admin passthru ...passed 00:16:43.539 Test: blockdev copy ...passed 00:16:43.539 00:16:43.539 Run Summary: Type Total Ran Passed Failed Inactive 00:16:43.539 suites 1 1 n/a 0 0 00:16:43.539 tests 23 23 23 0 0 00:16:43.539 asserts 152 152 152 0 n/a 00:16:43.539 00:16:43.539 Elapsed time = 1.178 seconds 00:16:43.796 07:33:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.796 07:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.796 07:33:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.796 07:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.796 07:33:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:43.797 07:33:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:43.797 07:33:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:43.797 07:33:59 -- nvmf/common.sh@116 -- # sync 00:16:43.797 07:33:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:43.797 07:33:59 -- nvmf/common.sh@119 -- # set +e 00:16:43.797 07:33:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:43.797 07:33:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:43.797 rmmod nvme_tcp 00:16:43.797 rmmod nvme_fabrics 00:16:43.797 rmmod nvme_keyring 00:16:43.797 07:33:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:43.797 07:33:59 -- nvmf/common.sh@123 -- # set -e 00:16:43.797 07:33:59 -- nvmf/common.sh@124 -- # return 0 00:16:43.797 07:33:59 -- nvmf/common.sh@477 -- # '[' -n 4103912 ']' 00:16:43.797 07:33:59 -- nvmf/common.sh@478 -- # killprocess 4103912 00:16:43.797 07:33:59 -- common/autotest_common.sh@926 -- # '[' -z 4103912 ']' 00:16:43.797 07:33:59 -- common/autotest_common.sh@930 -- # kill -0 4103912 00:16:43.797 07:33:59 -- common/autotest_common.sh@931 -- # uname 00:16:43.797 07:33:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:43.797 07:33:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4103912 00:16:43.797 07:33:59 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:43.797 07:33:59 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:43.797 07:33:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4103912' 00:16:43.797 killing process with pid 4103912 00:16:44.077 07:33:59 -- common/autotest_common.sh@945 -- # kill 4103912 00:16:44.077 07:33:59 -- common/autotest_common.sh@950 -- # wait 4103912 00:16:44.351 07:34:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:44.351 07:34:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:44.351 07:34:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:44.351 07:34:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.351 07:34:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:44.351 07:34:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.351 07:34:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.351 07:34:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.880 07:34:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:46.881 00:16:46.881 real 0m7.366s 00:16:46.881 user 0m14.774s 00:16:46.881 sys 0m2.511s 00:16:46.881 07:34:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.881 07:34:02 -- common/autotest_common.sh@10 -- # set +x 00:16:46.881 ************************************ 00:16:46.881 END TEST nvmf_bdevio_no_huge 00:16:46.881 ************************************ 00:16:46.881 07:34:02 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:46.881 07:34:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:46.881 07:34:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:46.881 07:34:02 -- common/autotest_common.sh@10 -- # set +x 00:16:46.881 ************************************ 00:16:46.881 START TEST nvmf_tls 00:16:46.881 ************************************ 00:16:46.881 07:34:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:46.881 * Looking for test storage... 
00:16:46.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.881 07:34:02 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.881 07:34:02 -- nvmf/common.sh@7 -- # uname -s 00:16:46.881 07:34:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.881 07:34:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.881 07:34:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.881 07:34:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.881 07:34:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.881 07:34:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.881 07:34:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.881 07:34:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.881 07:34:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.881 07:34:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.881 07:34:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.881 07:34:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.881 07:34:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.881 07:34:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.881 07:34:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.881 07:34:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.881 07:34:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.881 07:34:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.881 07:34:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.881 07:34:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.881 07:34:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.881 07:34:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.881 07:34:02 -- paths/export.sh@5 -- # export PATH 00:16:46.881 07:34:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.881 07:34:02 -- nvmf/common.sh@46 -- # : 0 00:16:46.881 07:34:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:46.881 07:34:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:46.881 07:34:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:46.881 07:34:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.881 07:34:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.881 07:34:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:46.881 07:34:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:46.881 07:34:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:46.881 07:34:02 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.881 07:34:02 -- target/tls.sh@71 -- # nvmftestinit 00:16:46.881 07:34:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:46.881 07:34:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.881 07:34:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:46.881 07:34:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:46.881 07:34:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:46.881 07:34:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.881 07:34:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.881 07:34:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.881 07:34:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:46.881 07:34:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:46.881 07:34:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:46.881 07:34:02 -- common/autotest_common.sh@10 -- # set +x 00:16:48.258 07:34:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:48.258 07:34:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:48.258 07:34:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:48.258 07:34:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:48.258 07:34:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:48.258 07:34:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:48.258 07:34:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:48.258 07:34:04 -- nvmf/common.sh@294 -- # net_devs=() 00:16:48.258 07:34:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:48.258 07:34:04 -- nvmf/common.sh@295 -- # e810=() 00:16:48.258 
07:34:04 -- nvmf/common.sh@295 -- # local -ga e810 00:16:48.258 07:34:04 -- nvmf/common.sh@296 -- # x722=() 00:16:48.258 07:34:04 -- nvmf/common.sh@296 -- # local -ga x722 00:16:48.258 07:34:04 -- nvmf/common.sh@297 -- # mlx=() 00:16:48.258 07:34:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:48.258 07:34:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.258 07:34:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:48.259 07:34:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:48.259 07:34:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:48.259 07:34:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:48.259 07:34:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:48.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:48.259 07:34:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:48.259 07:34:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:48.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:48.259 07:34:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:48.259 07:34:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:48.259 07:34:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.259 07:34:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:48.259 07:34:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.259 07:34:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:48.259 Found net devices under 
0000:0a:00.0: cvl_0_0 00:16:48.259 07:34:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.259 07:34:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:48.259 07:34:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.259 07:34:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:48.259 07:34:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.259 07:34:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:48.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:48.259 07:34:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.259 07:34:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:48.259 07:34:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:48.259 07:34:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:48.259 07:34:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:48.259 07:34:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.259 07:34:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.259 07:34:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.259 07:34:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:48.259 07:34:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.259 07:34:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.259 07:34:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:48.259 07:34:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.259 07:34:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.259 07:34:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:48.259 07:34:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:48.259 07:34:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.259 07:34:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.517 07:34:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.517 07:34:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.517 07:34:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:48.517 07:34:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.517 07:34:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.518 07:34:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.518 07:34:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:48.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:16:48.518 00:16:48.518 --- 10.0.0.2 ping statistics --- 00:16:48.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.518 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:16:48.518 07:34:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:16:48.518 00:16:48.518 --- 10.0.0.1 ping statistics --- 00:16:48.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.518 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:16:48.518 07:34:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.518 07:34:04 -- nvmf/common.sh@410 -- # return 0 00:16:48.518 07:34:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.518 07:34:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.518 07:34:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:48.518 07:34:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:48.518 07:34:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.518 07:34:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:48.518 07:34:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:48.518 07:34:04 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:48.518 07:34:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.518 07:34:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:48.518 07:34:04 -- common/autotest_common.sh@10 -- # set +x 00:16:48.518 07:34:04 -- nvmf/common.sh@469 -- # nvmfpid=4106165 00:16:48.518 07:34:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:48.518 07:34:04 -- nvmf/common.sh@470 -- # waitforlisten 4106165 00:16:48.518 07:34:04 -- common/autotest_common.sh@819 -- # '[' -z 4106165 ']' 00:16:48.518 07:34:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.518 07:34:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:48.518 07:34:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.518 07:34:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:48.518 07:34:04 -- common/autotest_common.sh@10 -- # set +x 00:16:48.518 [2024-07-14 07:34:04.626179] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:48.518 [2024-07-14 07:34:04.626264] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.518 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.776 [2024-07-14 07:34:04.701314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.776 [2024-07-14 07:34:04.816475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.776 [2024-07-14 07:34:04.816639] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.776 [2024-07-14 07:34:04.816658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.776 [2024-07-14 07:34:04.816672] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
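One difference from the earlier bdevio run is visible in the app invocation above: the TLS target starts with --wait-for-rpc. Socket-implementation options can only be changed before subsystem initialization, so the script holds the app at the RPC-only stage, switches the default socket implementation to ssl, pins the TLS version, and only then releases initialization. The ordering, condensed from the RPCs traced below (paths shortened to be repo-relative):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  # wait for /var/tmp/spdk.sock to appear before issuing RPCs
  scripts/rpc.py sock_set_default_impl -i ssl
  scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  scripts/rpc.py framework_start_init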
00:16:48.776 [2024-07-14 07:34:04.816706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.708 07:34:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:49.708 07:34:05 -- common/autotest_common.sh@852 -- # return 0 00:16:49.708 07:34:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.708 07:34:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:49.708 07:34:05 -- common/autotest_common.sh@10 -- # set +x 00:16:49.708 07:34:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.708 07:34:05 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:49.708 07:34:05 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:49.708 true 00:16:49.708 07:34:05 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:49.708 07:34:05 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:49.965 07:34:06 -- target/tls.sh@82 -- # version=0 00:16:49.965 07:34:06 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:49.965 07:34:06 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:50.222 07:34:06 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.222 07:34:06 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:50.480 07:34:06 -- target/tls.sh@90 -- # version=13 00:16:50.480 07:34:06 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:50.480 07:34:06 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:50.739 07:34:06 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.739 07:34:06 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:50.997 07:34:07 -- target/tls.sh@98 -- # version=7 00:16:50.997 07:34:07 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:50.997 07:34:07 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.997 07:34:07 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:51.255 07:34:07 -- target/tls.sh@105 -- # ktls=false 00:16:51.255 07:34:07 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:51.255 07:34:07 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:51.513 07:34:07 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.513 07:34:07 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:51.772 07:34:07 -- target/tls.sh@113 -- # ktls=true 00:16:51.772 07:34:07 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:51.772 07:34:07 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:52.031 07:34:07 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.031 07:34:07 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:52.289 07:34:08 -- target/tls.sh@121 -- # ktls=false 00:16:52.289 07:34:08 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:52.289 07:34:08 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:16:52.289 07:34:08 -- target/tls.sh@49 -- # local key hash crc 00:16:52.289 07:34:08 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:52.289 07:34:08 -- target/tls.sh@51 -- # hash=01 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # gzip -1 -c 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # tail -c8 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # head -c 4 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # crc='p$H�' 00:16:52.289 07:34:08 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:52.289 07:34:08 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:52.289 07:34:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.289 07:34:08 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.289 07:34:08 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:52.289 07:34:08 -- target/tls.sh@49 -- # local key hash crc 00:16:52.289 07:34:08 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:52.289 07:34:08 -- target/tls.sh@51 -- # hash=01 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # gzip -1 -c 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # tail -c8 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # head -c 4 00:16:52.289 07:34:08 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:52.289 07:34:08 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:52.289 07:34:08 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:52.289 07:34:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.289 07:34:08 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.289 07:34:08 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:52.289 07:34:08 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:52.289 07:34:08 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.289 07:34:08 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.289 07:34:08 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:52.289 07:34:08 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:52.289 07:34:08 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:52.548 07:34:08 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:52.806 07:34:08 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:52.806 07:34:08 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:52.806 07:34:08 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:53.064 [2024-07-14 07:34:09.088503] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
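format_interchange_psk, traced step by step above, builds the NVMe/TCP PSK interchange format: the configured key's CRC32 is appended to the key bytes and the result is base64-encoded under an NVMeTLSkey-1:<hash> prefix (hash 01 in this run). The script gets the CRC32 for free from gzip, whose stream trailer ends with four CRC bytes followed by four length bytes, hence the tail -c8 | head -c 4 idiom. A standalone re-derivation of the first key, assuming GNU gzip and coreutils:

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)   # little-endian CRC32 of the key
  echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
  # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: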
00:16:53.064 07:34:09 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:16:53.323 07:34:09 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:16:53.581 [2024-07-14 07:34:09.654047] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:16:53.581 [2024-07-14 07:34:09.654282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:53.581 07:34:09 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:16:53.839 malloc0
00:16:53.839 07:34:09 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:16:54.097 07:34:10 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:16:54.355 07:34:10 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:16:54.355 EAL: No free 2048 kB hugepages reported on node 1
00:17:06.548 Initializing NVMe Controllers
00:17:06.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:06.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:06.548 Initialization complete. Launching workers.
00:17:06.548 ========================================================
00:17:06.548 Latency(us)
00:17:06.548 Device Information : IOPS MiB/s Average min max
00:17:06.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7676.15 29.98 8340.32 1168.16 9311.14
00:17:06.548 ========================================================
00:17:06.548 Total : 7676.15 29.98 8340.32 1168.16 9311.14
00:17:06.548
00:17:06.548 07:34:20 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:17:06.548 07:34:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:17:06.548 07:34:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:17:06.548 07:34:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:17:06.548 07:34:20 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt'
00:17:06.548 07:34:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:06.548 07:34:20 -- target/tls.sh@28 -- # bdevperf_pid=4108251
00:17:06.548 07:34:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:17:06.548 07:34:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:06.548 07:34:20 -- target/tls.sh@31 -- # waitforlisten 4108251 /var/tmp/bdevperf.sock
00:17:06.548 07:34:20 -- common/autotest_common.sh@819 -- # '[' -z 4108251 ']'
00:17:06.548 07:34:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:06.548 07:34:20 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:06.548 07:34:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:06.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:06.548 07:34:20 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:06.548 07:34:20 -- common/autotest_common.sh@10 -- # set +x
00:17:06.548 [2024-07-14 07:34:20.585403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
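Before the bdevperf startup output continues below: every data-path case from here to the end of the section goes through tls.sh's run_bdevperf helper, whose launch was just traced (tls.sh@22-31). A sketch of its shape, reconstructed from those xtrace entries; this is not the verbatim helper, and $rootdir plus the waitforlisten, killprocess and cleanup functions are assumptions borrowed from the surrounding autotest scripts:

    run_bdevperf() {
        local subnqn=$1 hostnqn=$2 psk=${3:+--psk $3}   # empty third arg -> no --psk flag
        local bdevperf_rpc_sock=/var/tmp/bdevperf.sock
        "$rootdir/build/examples/bdevperf" -m 0x4 -z -r "$bdevperf_rpc_sock" \
            -q 128 -o 4096 -w verify -t 10 &
        local bdevperf_pid=$!
        trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$bdevperf_pid" "$bdevperf_rpc_sock"
        # the attach is where a bad or missing PSK fails (tls.sh@34/36)
        "$rootdir/scripts/rpc.py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller \
            -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" $psk
        # drive verify I/O through the attached controller for 10 s (tls.sh@41)
        "$rootdir/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$bdevperf_rpc_sock" perform_tests
        killprocess "$bdevperf_pid"
    }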
00:17:06.548 [2024-07-14 07:34:20.585480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108251 ] 00:17:06.548 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.548 [2024-07-14 07:34:20.642241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.548 [2024-07-14 07:34:20.749469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.548 07:34:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.548 07:34:21 -- common/autotest_common.sh@852 -- # return 0 00:17:06.548 07:34:21 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:06.548 [2024-07-14 07:34:21.783112] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.548 TLSTESTn1 00:17:06.548 07:34:21 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:06.548 Running I/O for 10 seconds... 00:17:16.534 00:17:16.534 Latency(us) 00:17:16.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.534 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:16.534 Verification LBA range: start 0x0 length 0x2000 00:17:16.534 TLSTESTn1 : 10.04 1760.10 6.88 0.00 0.00 72599.04 8738.13 83109.36 00:17:16.534 =================================================================================================================== 00:17:16.534 Total : 1760.10 6.88 0.00 0.00 72599.04 8738.13 83109.36 00:17:16.534 0 00:17:16.534 07:34:32 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.534 07:34:32 -- target/tls.sh@45 -- # killprocess 4108251 00:17:16.534 07:34:32 -- common/autotest_common.sh@926 -- # '[' -z 4108251 ']' 00:17:16.534 07:34:32 -- common/autotest_common.sh@930 -- # kill -0 4108251 00:17:16.534 07:34:32 -- common/autotest_common.sh@931 -- # uname 00:17:16.534 07:34:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:16.534 07:34:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4108251 00:17:16.534 07:34:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:16.534 07:34:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:16.534 07:34:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4108251' 00:17:16.534 killing process with pid 4108251 00:17:16.534 07:34:32 -- common/autotest_common.sh@945 -- # kill 4108251 00:17:16.534 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.534 00:17:16.534 Latency(us) 00:17:16.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.534 =================================================================================================================== 00:17:16.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.534 07:34:32 -- common/autotest_common.sh@950 -- # wait 4108251 00:17:16.534 07:34:32 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:16.534 07:34:32 -- common/autotest_common.sh@640 -- # local es=0 00:17:16.534 07:34:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:16.534 07:34:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:16.534 07:34:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:16.534 07:34:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:16.534 07:34:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:16.534 07:34:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:16.534 07:34:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.534 07:34:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.534 07:34:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.534 07:34:32 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:17:16.534 07:34:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.534 07:34:32 -- target/tls.sh@28 -- # bdevperf_pid=4109626 00:17:16.534 07:34:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.534 07:34:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.534 07:34:32 -- target/tls.sh@31 -- # waitforlisten 4109626 /var/tmp/bdevperf.sock 00:17:16.534 07:34:32 -- common/autotest_common.sh@819 -- # '[' -z 4109626 ']' 00:17:16.534 07:34:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.534 07:34:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:16.534 07:34:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.534 07:34:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:16.534 07:34:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.534 [2024-07-14 07:34:32.376992] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:16.535 [2024-07-14 07:34:32.377073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109626 ] 00:17:16.535 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.535 [2024-07-14 07:34:32.433951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.535 [2024-07-14 07:34:32.534173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.467 07:34:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:17.467 07:34:33 -- common/autotest_common.sh@852 -- # return 0 00:17:17.468 07:34:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:17.468 [2024-07-14 07:34:33.517120] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.468 [2024-07-14 07:34:33.528563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:17.468 [2024-07-14 07:34:33.529376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf40870 (107): Transport endpoint is not connected 00:17:17.468 [2024-07-14 07:34:33.530365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf40870 (9): Bad file descriptor 00:17:17.468 [2024-07-14 07:34:33.531364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:17.468 [2024-07-14 07:34:33.531382] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:17.468 [2024-07-14 07:34:33.531411] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
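The transport errors above are the expected outcome, not a defect in the run: this attach was started under tls.sh's NOT wrapper (tls.sh@155), so the stage passes precisely because key2.txt does not match the PSK registered for host1, and the JSON-RPC error dump that follows is what run_bdevperf sees. A condensed sketch of NOT, inferred from the common/autotest_common.sh@640-667 xtrace around these cases; the real helper's valid_exec_arg dispatch and signal-status handling are trimmed:

    NOT() {
        local es=0
        "$@" || es=$?
        # succeed only if the wrapped command failed; the (( es > 128 ))
        # branch seen in the trace treats signal-style exit codes separately
        (( !es == 0 ))
    }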
00:17:17.468 request: 00:17:17.468 { 00:17:17.468 "name": "TLSTEST", 00:17:17.468 "trtype": "tcp", 00:17:17.468 "traddr": "10.0.0.2", 00:17:17.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.468 "adrfam": "ipv4", 00:17:17.468 "trsvcid": "4420", 00:17:17.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.468 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:17:17.468 "method": "bdev_nvme_attach_controller", 00:17:17.468 "req_id": 1 00:17:17.468 } 00:17:17.468 Got JSON-RPC error response 00:17:17.468 response: 00:17:17.468 { 00:17:17.468 "code": -32602, 00:17:17.468 "message": "Invalid parameters" 00:17:17.468 } 00:17:17.468 07:34:33 -- target/tls.sh@36 -- # killprocess 4109626 00:17:17.468 07:34:33 -- common/autotest_common.sh@926 -- # '[' -z 4109626 ']' 00:17:17.468 07:34:33 -- common/autotest_common.sh@930 -- # kill -0 4109626 00:17:17.468 07:34:33 -- common/autotest_common.sh@931 -- # uname 00:17:17.468 07:34:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:17.468 07:34:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4109626 00:17:17.468 07:34:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:17.468 07:34:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:17.468 07:34:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4109626' 00:17:17.468 killing process with pid 4109626 00:17:17.468 07:34:33 -- common/autotest_common.sh@945 -- # kill 4109626 00:17:17.468 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.468 00:17:17.468 Latency(us) 00:17:17.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.468 =================================================================================================================== 00:17:17.468 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.468 07:34:33 -- common/autotest_common.sh@950 -- # wait 4109626 00:17:17.725 07:34:33 -- target/tls.sh@37 -- # return 1 00:17:17.725 07:34:33 -- common/autotest_common.sh@643 -- # es=1 00:17:17.725 07:34:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:17.725 07:34:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:17.725 07:34:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:17.725 07:34:33 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:17.725 07:34:33 -- common/autotest_common.sh@640 -- # local es=0 00:17:17.725 07:34:33 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:17.725 07:34:33 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:17.725 07:34:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:17.725 07:34:33 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:17.725 07:34:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:17.725 07:34:33 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:17.725 07:34:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.725 07:34:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:17.725 07:34:33 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:17:17.725 07:34:33 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:17.725 07:34:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.725 07:34:33 -- target/tls.sh@28 -- # bdevperf_pid=4109774 00:17:17.725 07:34:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.725 07:34:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.725 07:34:33 -- target/tls.sh@31 -- # waitforlisten 4109774 /var/tmp/bdevperf.sock 00:17:17.726 07:34:33 -- common/autotest_common.sh@819 -- # '[' -z 4109774 ']' 00:17:17.726 07:34:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.726 07:34:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.726 07:34:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.726 07:34:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.726 07:34:33 -- common/autotest_common.sh@10 -- # set +x 00:17:17.726 [2024-07-14 07:34:33.887696] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:17.726 [2024-07-14 07:34:33.887774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109774 ] 00:17:17.984 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.984 [2024-07-14 07:34:33.944819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.984 [2024-07-14 07:34:34.049676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.914 07:34:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.914 07:34:34 -- common/autotest_common.sh@852 -- # return 0 00:17:18.914 07:34:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:18.914 [2024-07-14 07:34:35.083985] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.171 [2024-07-14 07:34:35.092425] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:19.171 [2024-07-14 07:34:35.092456] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:19.171 [2024-07-14 07:34:35.092508] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:19.171 [2024-07-14 07:34:35.093279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x622870 (107): Transport endpoint is not connected 00:17:19.171 [2024-07-14 07:34:35.094267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x622870 (9): Bad file descriptor 00:17:19.171 [2024-07-14 07:34:35.095266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:19.171 [2024-07-14 07:34:35.095284] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:19.171 [2024-07-14 07:34:35.095313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:19.171 request: 00:17:19.171 { 00:17:19.171 "name": "TLSTEST", 00:17:19.171 "trtype": "tcp", 00:17:19.171 "traddr": "10.0.0.2", 00:17:19.171 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:19.172 "adrfam": "ipv4", 00:17:19.172 "trsvcid": "4420", 00:17:19.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.172 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:17:19.172 "method": "bdev_nvme_attach_controller", 00:17:19.172 "req_id": 1 00:17:19.172 } 00:17:19.172 Got JSON-RPC error response 00:17:19.172 response: 00:17:19.172 { 00:17:19.172 "code": -32602, 00:17:19.172 "message": "Invalid parameters" 00:17:19.172 } 00:17:19.172 07:34:35 -- target/tls.sh@36 -- # killprocess 4109774 00:17:19.172 07:34:35 -- common/autotest_common.sh@926 -- # '[' -z 4109774 ']' 00:17:19.172 07:34:35 -- common/autotest_common.sh@930 -- # kill -0 4109774 00:17:19.172 07:34:35 -- common/autotest_common.sh@931 -- # uname 00:17:19.172 07:34:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:19.172 07:34:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4109774 00:17:19.172 07:34:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:19.172 07:34:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:19.172 07:34:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4109774' 00:17:19.172 killing process with pid 4109774 00:17:19.172 07:34:35 -- common/autotest_common.sh@945 -- # kill 4109774 00:17:19.172 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.172 00:17:19.172 Latency(us) 00:17:19.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.172 =================================================================================================================== 00:17:19.172 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.172 07:34:35 -- common/autotest_common.sh@950 -- # wait 4109774 00:17:19.429 07:34:35 -- target/tls.sh@37 -- # return 1 00:17:19.429 07:34:35 -- common/autotest_common.sh@643 -- # es=1 00:17:19.429 07:34:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:19.429 07:34:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:19.429 07:34:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:19.429 07:34:35 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:19.429 07:34:35 -- common/autotest_common.sh@640 -- # local es=0 00:17:19.429 07:34:35 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:19.429 07:34:35 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:19.429 07:34:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:19.429 07:34:35 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:19.429 07:34:35 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:19.429 07:34:35 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:19.429 07:34:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.429 07:34:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:19.429 07:34:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.429 07:34:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:19.429 07:34:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.429 07:34:35 -- target/tls.sh@28 -- # bdevperf_pid=4110047 00:17:19.429 07:34:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.429 07:34:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.429 07:34:35 -- target/tls.sh@31 -- # waitforlisten 4110047 /var/tmp/bdevperf.sock 00:17:19.429 07:34:35 -- common/autotest_common.sh@819 -- # '[' -z 4110047 ']' 00:17:19.429 07:34:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.429 07:34:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:19.429 07:34:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.429 07:34:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:19.429 07:34:35 -- common/autotest_common.sh@10 -- # set +x 00:17:19.429 [2024-07-14 07:34:35.416742] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:19.429 [2024-07-14 07:34:35.416822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110047 ] 00:17:19.429 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.429 [2024-07-14 07:34:35.475753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.429 [2024-07-14 07:34:35.584601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.363 07:34:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:20.363 07:34:36 -- common/autotest_common.sh@852 -- # return 0 00:17:20.363 07:34:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:20.620 [2024-07-14 07:34:36.645351] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.620 [2024-07-14 07:34:36.656549] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:20.620 [2024-07-14 07:34:36.656588] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:20.621 [2024-07-14 07:34:36.656626] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:20.621 [2024-07-14 07:34:36.657473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc90870 (107): Transport endpoint is not connected 00:17:20.621 [2024-07-14 07:34:36.658463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc90870 (9): Bad file descriptor 00:17:20.621 [2024-07-14 07:34:36.659463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:20.621 [2024-07-14 07:34:36.659480] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:20.621 [2024-07-14 07:34:36.659508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
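For orientation, the identity-lookup failure above (Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2) is the third in a series of four expected-failure attachments. Condensed from the tls.sh invocations traced in this log, with key_path and key_2_path as set at tls.sh@130-131:

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 "$key_2_path"  # tls.sh@155: wrong key material
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 "$key_path"    # tls.sh@158: host has no PSK registered
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 "$key_path"    # tls.sh@161: subsystem does not exist
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''             # tls.sh@164: no PSK at all

All four produce the same observable result in this log: the connection is torn down during controller initialization and bdev_nvme_attach_controller returns -32602 (Invalid parameters).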
00:17:20.621 request: 00:17:20.621 { 00:17:20.621 "name": "TLSTEST", 00:17:20.621 "trtype": "tcp", 00:17:20.621 "traddr": "10.0.0.2", 00:17:20.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.621 "adrfam": "ipv4", 00:17:20.621 "trsvcid": "4420", 00:17:20.621 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:20.621 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:17:20.621 "method": "bdev_nvme_attach_controller", 00:17:20.621 "req_id": 1 00:17:20.621 } 00:17:20.621 Got JSON-RPC error response 00:17:20.621 response: 00:17:20.621 { 00:17:20.621 "code": -32602, 00:17:20.621 "message": "Invalid parameters" 00:17:20.621 } 00:17:20.621 07:34:36 -- target/tls.sh@36 -- # killprocess 4110047 00:17:20.621 07:34:36 -- common/autotest_common.sh@926 -- # '[' -z 4110047 ']' 00:17:20.621 07:34:36 -- common/autotest_common.sh@930 -- # kill -0 4110047 00:17:20.621 07:34:36 -- common/autotest_common.sh@931 -- # uname 00:17:20.621 07:34:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:20.621 07:34:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4110047 00:17:20.621 07:34:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:20.621 07:34:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:20.621 07:34:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4110047' 00:17:20.621 killing process with pid 4110047 00:17:20.621 07:34:36 -- common/autotest_common.sh@945 -- # kill 4110047 00:17:20.621 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.621 00:17:20.621 Latency(us) 00:17:20.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.621 =================================================================================================================== 00:17:20.621 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.621 07:34:36 -- common/autotest_common.sh@950 -- # wait 4110047 00:17:20.878 07:34:36 -- target/tls.sh@37 -- # return 1 00:17:20.878 07:34:36 -- common/autotest_common.sh@643 -- # es=1 00:17:20.878 07:34:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:20.878 07:34:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:20.878 07:34:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:20.878 07:34:36 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.878 07:34:36 -- common/autotest_common.sh@640 -- # local es=0 00:17:20.878 07:34:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.878 07:34:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:20.878 07:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:20.878 07:34:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:20.878 07:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:20.878 07:34:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.878 07:34:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.878 07:34:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.878 07:34:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.878 07:34:36 -- target/tls.sh@23 -- # psk= 00:17:20.878 07:34:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.878 07:34:36 -- target/tls.sh@28 
-- # bdevperf_pid=4110199 00:17:20.878 07:34:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.878 07:34:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.878 07:34:36 -- target/tls.sh@31 -- # waitforlisten 4110199 /var/tmp/bdevperf.sock 00:17:20.878 07:34:36 -- common/autotest_common.sh@819 -- # '[' -z 4110199 ']' 00:17:20.878 07:34:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.878 07:34:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:20.878 07:34:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.878 07:34:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.878 07:34:36 -- common/autotest_common.sh@10 -- # set +x 00:17:20.878 [2024-07-14 07:34:36.982041] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:20.878 [2024-07-14 07:34:36.982130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110199 ] 00:17:20.878 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.878 [2024-07-14 07:34:37.047700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.136 [2024-07-14 07:34:37.158120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.067 07:34:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.067 07:34:37 -- common/autotest_common.sh@852 -- # return 0 00:17:22.067 07:34:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:22.067 [2024-07-14 07:34:38.160647] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:22.067 [2024-07-14 07:34:38.162812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5b330 (9): Bad file descriptor 00:17:22.067 [2024-07-14 07:34:38.163808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:22.067 [2024-07-14 07:34:38.163829] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:22.067 [2024-07-14 07:34:38.163858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:22.067 request: 00:17:22.067 { 00:17:22.067 "name": "TLSTEST", 00:17:22.067 "trtype": "tcp", 00:17:22.067 "traddr": "10.0.0.2", 00:17:22.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.067 "adrfam": "ipv4", 00:17:22.067 "trsvcid": "4420", 00:17:22.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.067 "method": "bdev_nvme_attach_controller", 00:17:22.067 "req_id": 1 00:17:22.067 } 00:17:22.067 Got JSON-RPC error response 00:17:22.067 response: 00:17:22.067 { 00:17:22.067 "code": -32602, 00:17:22.067 "message": "Invalid parameters" 00:17:22.067 } 00:17:22.067 07:34:38 -- target/tls.sh@36 -- # killprocess 4110199 00:17:22.067 07:34:38 -- common/autotest_common.sh@926 -- # '[' -z 4110199 ']' 00:17:22.067 07:34:38 -- common/autotest_common.sh@930 -- # kill -0 4110199 00:17:22.067 07:34:38 -- common/autotest_common.sh@931 -- # uname 00:17:22.067 07:34:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.067 07:34:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4110199 00:17:22.067 07:34:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:22.067 07:34:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:22.067 07:34:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4110199' 00:17:22.067 killing process with pid 4110199 00:17:22.067 07:34:38 -- common/autotest_common.sh@945 -- # kill 4110199 00:17:22.067 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.067 00:17:22.067 Latency(us) 00:17:22.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.067 =================================================================================================================== 00:17:22.067 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.067 07:34:38 -- common/autotest_common.sh@950 -- # wait 4110199 00:17:22.324 07:34:38 -- target/tls.sh@37 -- # return 1 00:17:22.324 07:34:38 -- common/autotest_common.sh@643 -- # es=1 00:17:22.324 07:34:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:22.324 07:34:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:22.324 07:34:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:22.324 07:34:38 -- target/tls.sh@167 -- # killprocess 4106165 00:17:22.324 07:34:38 -- common/autotest_common.sh@926 -- # '[' -z 4106165 ']' 00:17:22.324 07:34:38 -- common/autotest_common.sh@930 -- # kill -0 4106165 00:17:22.324 07:34:38 -- common/autotest_common.sh@931 -- # uname 00:17:22.324 07:34:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.324 07:34:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4106165 00:17:22.581 07:34:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:22.582 07:34:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:22.582 07:34:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4106165' 00:17:22.582 killing process with pid 4106165 00:17:22.582 07:34:38 -- common/autotest_common.sh@945 -- # kill 4106165 00:17:22.582 07:34:38 -- common/autotest_common.sh@950 -- # wait 4106165 00:17:22.839 07:34:38 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:22.839 07:34:38 -- target/tls.sh@49 -- # local key hash crc 00:17:22.839 07:34:38 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:22.839 07:34:38 -- target/tls.sh@51 -- # hash=02 00:17:22.839 07:34:38 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:17:22.839 07:34:38 -- target/tls.sh@52 -- # gzip -1 -c 00:17:22.839 07:34:38 -- target/tls.sh@52 -- # tail -c8 00:17:22.839 07:34:38 -- target/tls.sh@52 -- # head -c 4 00:17:22.839 07:34:38 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:22.839 07:34:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:22.839 07:34:38 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:22.839 07:34:38 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.839 07:34:38 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.839 07:34:38 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:22.839 07:34:38 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.839 07:34:38 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:22.839 07:34:38 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:22.839 07:34:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.839 07:34:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:22.839 07:34:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.839 07:34:38 -- nvmf/common.sh@469 -- # nvmfpid=4110490 00:17:22.839 07:34:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.839 07:34:38 -- nvmf/common.sh@470 -- # waitforlisten 4110490 00:17:22.839 07:34:38 -- common/autotest_common.sh@819 -- # '[' -z 4110490 ']' 00:17:22.839 07:34:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.839 07:34:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.839 07:34:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.839 07:34:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.839 07:34:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.839 [2024-07-14 07:34:38.849888] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:22.839 [2024-07-14 07:34:38.849982] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.839 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.839 [2024-07-14 07:34:38.917976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.096 [2024-07-14 07:34:39.036888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:23.096 [2024-07-14 07:34:39.037054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.097 [2024-07-14 07:34:39.037071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.097 [2024-07-14 07:34:39.037084] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
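A note on the tls.sh@168 derivation a few entries up, before the restarted target's output continues: key_long is 48 hex digits with hash id 02, where the earlier keys were 32 digits with id 01. The id is carried in the NVMeTLSkey-1:<id>: prefix and appears to select the retained-hash variant of the interchange format (01 and 02 plausibly map to SHA-256 and SHA-384; that mapping is an inference, the log itself only shows the ids). The CRC32-plus-base64 mechanics are unchanged:

    key=00112233445566778899aabbccddeeff0011223344556677
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    echo "NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"
    # prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
    # matching key_long in the trace above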
00:17:23.097 [2024-07-14 07:34:39.037131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.661 07:34:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:23.661 07:34:39 -- common/autotest_common.sh@852 -- # return 0 00:17:23.661 07:34:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.661 07:34:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:23.661 07:34:39 -- common/autotest_common.sh@10 -- # set +x 00:17:23.919 07:34:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.919 07:34:39 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:23.919 07:34:39 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:23.919 07:34:39 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:23.919 [2024-07-14 07:34:40.064348] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.919 07:34:40 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:24.177 07:34:40 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:24.434 [2024-07-14 07:34:40.525570] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.434 [2024-07-14 07:34:40.525805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.434 07:34:40 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:24.693 malloc0 00:17:24.693 07:34:40 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.951 07:34:41 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:25.209 07:34:41 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:25.209 07:34:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:25.209 07:34:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:25.209 07:34:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:25.209 07:34:41 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:25.209 07:34:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:25.209 07:34:41 -- target/tls.sh@28 -- # bdevperf_pid=4110795 00:17:25.209 07:34:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:25.209 07:34:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:25.209 07:34:41 -- target/tls.sh@31 -- # waitforlisten 4110795 /var/tmp/bdevperf.sock 00:17:25.209 07:34:41 -- common/autotest_common.sh@819 -- # '[' -z 4110795 
']' 00:17:25.209 07:34:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.209 07:34:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.209 07:34:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.209 07:34:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.209 07:34:41 -- common/autotest_common.sh@10 -- # set +x 00:17:25.209 [2024-07-14 07:34:41.316769] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:25.209 [2024-07-14 07:34:41.316860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110795 ] 00:17:25.209 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.467 [2024-07-14 07:34:41.379977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.467 [2024-07-14 07:34:41.486650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.400 07:34:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.400 07:34:42 -- common/autotest_common.sh@852 -- # return 0 00:17:26.400 07:34:42 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:26.400 [2024-07-14 07:34:42.500323] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.659 TLSTESTn1 00:17:26.659 07:34:42 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:26.659 Running I/O for 10 seconds... 
00:17:36.671 00:17:36.671 Latency(us) 00:17:36.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.671 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.671 Verification LBA range: start 0x0 length 0x2000 00:17:36.671 TLSTESTn1 : 10.04 1768.55 6.91 0.00 0.00 72263.56 7039.05 95925.29 00:17:36.671 =================================================================================================================== 00:17:36.671 Total : 1768.55 6.91 0.00 0.00 72263.56 7039.05 95925.29 00:17:36.671 0 00:17:36.671 07:34:52 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.671 07:34:52 -- target/tls.sh@45 -- # killprocess 4110795 00:17:36.671 07:34:52 -- common/autotest_common.sh@926 -- # '[' -z 4110795 ']' 00:17:36.671 07:34:52 -- common/autotest_common.sh@930 -- # kill -0 4110795 00:17:36.671 07:34:52 -- common/autotest_common.sh@931 -- # uname 00:17:36.671 07:34:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.671 07:34:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4110795 00:17:36.671 07:34:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:36.671 07:34:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:36.671 07:34:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4110795' 00:17:36.671 killing process with pid 4110795 00:17:36.671 07:34:52 -- common/autotest_common.sh@945 -- # kill 4110795 00:17:36.671 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.671 00:17:36.671 Latency(us) 00:17:36.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.671 =================================================================================================================== 00:17:36.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.671 07:34:52 -- common/autotest_common.sh@950 -- # wait 4110795 00:17:36.930 07:34:53 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:36.930 07:34:53 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:36.930 07:34:53 -- common/autotest_common.sh@640 -- # local es=0 00:17:36.930 07:34:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:36.930 07:34:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:36.930 07:34:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.930 07:34:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:36.930 07:34:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.930 07:34:53 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:36.930 07:34:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.930 07:34:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:36.930 07:34:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:36.930 07:34:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:36.930 07:34:53 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.930 07:34:53 -- target/tls.sh@28 -- # bdevperf_pid=4112160 00:17:36.930 07:34:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.930 07:34:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.930 07:34:53 -- target/tls.sh@31 -- # waitforlisten 4112160 /var/tmp/bdevperf.sock 00:17:36.930 07:34:53 -- common/autotest_common.sh@819 -- # '[' -z 4112160 ']' 00:17:36.930 07:34:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.930 07:34:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:36.930 07:34:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.930 07:34:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:36.930 07:34:53 -- common/autotest_common.sh@10 -- # set +x 00:17:37.190 [2024-07-14 07:34:53.102121] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:37.190 [2024-07-14 07:34:53.102209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112160 ] 00:17:37.190 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.190 [2024-07-14 07:34:53.158753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.190 [2024-07-14 07:34:53.265322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.121 07:34:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.121 07:34:54 -- common/autotest_common.sh@852 -- # return 0 00:17:38.121 07:34:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:38.379 [2024-07-14 07:34:54.295557] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.379 [2024-07-14 07:34:54.295598] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:38.379 request: 00:17:38.379 { 00:17:38.379 "name": "TLSTEST", 00:17:38.379 "trtype": "tcp", 00:17:38.379 "traddr": "10.0.0.2", 00:17:38.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.379 "adrfam": "ipv4", 00:17:38.379 "trsvcid": "4420", 00:17:38.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.379 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:38.379 "method": "bdev_nvme_attach_controller", 00:17:38.379 "req_id": 1 00:17:38.379 } 00:17:38.379 Got JSON-RPC error response 00:17:38.379 response: 00:17:38.379 { 00:17:38.379 "code": -22, 00:17:38.379 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:38.379 } 00:17:38.379 07:34:54 -- target/tls.sh@36 -- # killprocess 4112160 00:17:38.379 07:34:54 -- common/autotest_common.sh@926 -- # '[' -z 4112160 ']' 00:17:38.379 07:34:54 -- 
common/autotest_common.sh@930 -- # kill -0 4112160 00:17:38.379 07:34:54 -- common/autotest_common.sh@931 -- # uname 00:17:38.379 07:34:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.379 07:34:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4112160 00:17:38.379 07:34:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:38.379 07:34:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:38.379 07:34:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4112160' 00:17:38.379 killing process with pid 4112160 00:17:38.379 07:34:54 -- common/autotest_common.sh@945 -- # kill 4112160 00:17:38.379 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.379 00:17:38.379 Latency(us) 00:17:38.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.379 =================================================================================================================== 00:17:38.379 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.379 07:34:54 -- common/autotest_common.sh@950 -- # wait 4112160 00:17:38.637 07:34:54 -- target/tls.sh@37 -- # return 1 00:17:38.637 07:34:54 -- common/autotest_common.sh@643 -- # es=1 00:17:38.637 07:34:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:38.637 07:34:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:38.637 07:34:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:38.637 07:34:54 -- target/tls.sh@183 -- # killprocess 4110490 00:17:38.637 07:34:54 -- common/autotest_common.sh@926 -- # '[' -z 4110490 ']' 00:17:38.637 07:34:54 -- common/autotest_common.sh@930 -- # kill -0 4110490 00:17:38.637 07:34:54 -- common/autotest_common.sh@931 -- # uname 00:17:38.637 07:34:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.637 07:34:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4110490 00:17:38.637 07:34:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:38.637 07:34:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:38.637 07:34:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4110490' 00:17:38.637 killing process with pid 4110490 00:17:38.637 07:34:54 -- common/autotest_common.sh@945 -- # kill 4110490 00:17:38.637 07:34:54 -- common/autotest_common.sh@950 -- # wait 4110490 00:17:38.894 07:34:54 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:38.894 07:34:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:38.894 07:34:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:38.894 07:34:54 -- common/autotest_common.sh@10 -- # set +x 00:17:38.894 07:34:54 -- nvmf/common.sh@469 -- # nvmfpid=4112438 00:17:38.894 07:34:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:38.894 07:34:54 -- nvmf/common.sh@470 -- # waitforlisten 4112438 00:17:38.894 07:34:54 -- common/autotest_common.sh@819 -- # '[' -z 4112438 ']' 00:17:38.894 07:34:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.894 07:34:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:38.894 07:34:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
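The block above is the initiator-side file-permission gate: tls.sh@179 loosened key_long.txt to 0666, and bdev_nvme_attach_controller refused to load it (bdev_nvme_rpc.c: Incorrect permissions for PSK file, JSON-RPC error -22). The target restart in progress here sets up the mirror-image check: tls.sh@186 below repeats the experiment server-side, where nvmf_subsystem_add_host fails with -32603. Condensed, with key_long_path as set at tls.sh@169:

    chmod 0666 "$key_long_path"
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 "$key_long_path"  # initiator refuses the PSK file: -22
    NOT setup_nvmf_tgt "$key_long_path"    # target refuses in nvmf_subsystem_add_host: -32603
    chmod 0600 "$key_long_path"            # tls.sh@190 restores owner-only access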
00:17:38.895 07:34:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:38.895 07:34:54 -- common/autotest_common.sh@10 -- # set +x 00:17:38.895 [2024-07-14 07:34:54.970701] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:38.895 [2024-07-14 07:34:54.970804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.895 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.895 [2024-07-14 07:34:55.037144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.152 [2024-07-14 07:34:55.152736] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:39.152 [2024-07-14 07:34:55.152950] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.152 [2024-07-14 07:34:55.152973] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.152 [2024-07-14 07:34:55.152986] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.152 [2024-07-14 07:34:55.153014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.087 07:34:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:40.087 07:34:55 -- common/autotest_common.sh@852 -- # return 0 00:17:40.087 07:34:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:40.087 07:34:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:40.087 07:34:55 -- common/autotest_common.sh@10 -- # set +x 00:17:40.087 07:34:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.087 07:34:55 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:40.087 07:34:55 -- common/autotest_common.sh@640 -- # local es=0 00:17:40.087 07:34:55 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:40.087 07:34:55 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:40.087 07:34:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:40.087 07:34:55 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:40.087 07:34:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:40.087 07:34:55 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:40.087 07:34:55 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:40.087 07:34:55 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:40.087 [2024-07-14 07:34:56.236833] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.345 07:34:56 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:40.603 07:34:56 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.861 [2024-07-14 07:34:56.798384] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.861 [2024-07-14 07:34:56.798636] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.861 07:34:56 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:41.120 malloc0 00:17:41.120 07:34:57 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:41.378 07:34:57 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:41.378 [2024-07-14 07:34:57.507582] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:41.378 [2024-07-14 07:34:57.507625] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:41.378 [2024-07-14 07:34:57.507661] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:41.378 request: 00:17:41.378 { 00:17:41.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.378 "host": "nqn.2016-06.io.spdk:host1", 00:17:41.378 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:41.378 "method": "nvmf_subsystem_add_host", 00:17:41.378 "req_id": 1 00:17:41.378 } 00:17:41.378 Got JSON-RPC error response 00:17:41.378 response: 00:17:41.378 { 00:17:41.378 "code": -32603, 00:17:41.378 "message": "Internal error" 00:17:41.378 } 00:17:41.378 07:34:57 -- common/autotest_common.sh@643 -- # es=1 00:17:41.378 07:34:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:41.378 07:34:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:41.378 07:34:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:41.378 07:34:57 -- target/tls.sh@189 -- # killprocess 4112438 00:17:41.378 07:34:57 -- common/autotest_common.sh@926 -- # '[' -z 4112438 ']' 00:17:41.378 07:34:57 -- common/autotest_common.sh@930 -- # kill -0 4112438 00:17:41.378 07:34:57 -- common/autotest_common.sh@931 -- # uname 00:17:41.378 07:34:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.378 07:34:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4112438 00:17:41.636 07:34:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:41.636 07:34:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:41.636 07:34:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4112438' 00:17:41.636 killing process with pid 4112438 00:17:41.636 07:34:57 -- common/autotest_common.sh@945 -- # kill 4112438 00:17:41.636 07:34:57 -- common/autotest_common.sh@950 -- # wait 4112438 00:17:41.893 07:34:57 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:41.893 07:34:57 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:41.893 07:34:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.893 07:34:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:41.893 07:34:57 -- common/autotest_common.sh@10 -- # set +x 00:17:41.893 07:34:57 -- nvmf/common.sh@469 -- # nvmfpid=4112858 00:17:41.893 07:34:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:17:41.893 07:34:57 -- nvmf/common.sh@470 -- # waitforlisten 4112858 00:17:41.893 07:34:57 -- common/autotest_common.sh@819 -- # '[' -z 4112858 ']' 00:17:41.893 07:34:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.893 07:34:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.893 07:34:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.893 07:34:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.893 07:34:57 -- common/autotest_common.sh@10 -- # set +x 00:17:41.893 [2024-07-14 07:34:57.911117] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:41.893 [2024-07-14 07:34:57.911237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.893 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.893 [2024-07-14 07:34:57.980715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.150 [2024-07-14 07:34:58.092941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:42.150 [2024-07-14 07:34:58.093108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.150 [2024-07-14 07:34:58.093130] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.151 [2024-07-14 07:34:58.093146] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
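
key_long.txt is now mode 0600 (target/tls.sh@190 above), so the setup_nvmf_tgt replay at @194 below completes without the -32603 nvmf_subsystem_add_host failure from the @186 negative test. Condensed from the xtrace that follows, the whole target-side TLS bring-up is six RPCs (key path shortened here):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt

The -k on the listener is what turns on TLS; it surfaces as "secure_channel": true in the save_config dump later in this run.
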
00:17:42.151 [2024-07-14 07:34:58.093177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.715 07:34:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.715 07:34:58 -- common/autotest_common.sh@852 -- # return 0 00:17:42.715 07:34:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.715 07:34:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:42.715 07:34:58 -- common/autotest_common.sh@10 -- # set +x 00:17:42.715 07:34:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.715 07:34:58 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:42.715 07:34:58 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:42.715 07:34:58 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:42.972 [2024-07-14 07:34:59.085462] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.972 07:34:59 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.229 07:34:59 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.484 [2024-07-14 07:34:59.550714] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.484 [2024-07-14 07:34:59.550975] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.484 07:34:59 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:43.741 malloc0 00:17:43.741 07:34:59 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:43.999 07:35:00 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:44.256 07:35:00 -- target/tls.sh@197 -- # bdevperf_pid=4113179 00:17:44.256 07:35:00 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.256 07:35:00 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.256 07:35:00 -- target/tls.sh@200 -- # waitforlisten 4113179 /var/tmp/bdevperf.sock 00:17:44.256 07:35:00 -- common/autotest_common.sh@819 -- # '[' -z 4113179 ']' 00:17:44.256 07:35:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.256 07:35:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.256 07:35:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
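
bdevperf (pid 4113179) was launched with -z, so it idles on /var/tmp/bdevperf.sock until configured. target/tls.sh@201 below repeats the host-side attach that failed at the top of this phase; with the key permissions fixed it only logs the experimental-TLS notice and registers bdev TLSTESTn1:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt

The save_config calls at @205 and @206 that follow then snapshot both the target and the bdevperf configuration as JSON ($tgtconf and $bdevperfconf) for the replay phase of the test.
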
00:17:44.256 07:35:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.256 07:35:00 -- common/autotest_common.sh@10 -- # set +x 00:17:44.256 [2024-07-14 07:35:00.336061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:44.256 [2024-07-14 07:35:00.336137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113179 ] 00:17:44.256 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.256 [2024-07-14 07:35:00.393374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.514 [2024-07-14 07:35:00.496622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.448 07:35:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:45.448 07:35:01 -- common/autotest_common.sh@852 -- # return 0 00:17:45.448 07:35:01 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:45.448 [2024-07-14 07:35:01.520348] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.448 TLSTESTn1 00:17:45.448 07:35:01 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:46.013 07:35:01 -- target/tls.sh@205 -- # tgtconf='{ 00:17:46.013 "subsystems": [ 00:17:46.013 { 00:17:46.013 "subsystem": "iobuf", 00:17:46.013 "config": [ 00:17:46.013 { 00:17:46.013 "method": "iobuf_set_options", 00:17:46.013 "params": { 00:17:46.013 "small_pool_count": 8192, 00:17:46.013 "large_pool_count": 1024, 00:17:46.013 "small_bufsize": 8192, 00:17:46.013 "large_bufsize": 135168 00:17:46.013 } 00:17:46.013 } 00:17:46.013 ] 00:17:46.013 }, 00:17:46.013 { 00:17:46.013 "subsystem": "sock", 00:17:46.013 "config": [ 00:17:46.013 { 00:17:46.013 "method": "sock_impl_set_options", 00:17:46.013 "params": { 00:17:46.013 "impl_name": "posix", 00:17:46.013 "recv_buf_size": 2097152, 00:17:46.013 "send_buf_size": 2097152, 00:17:46.013 "enable_recv_pipe": true, 00:17:46.013 "enable_quickack": false, 00:17:46.013 "enable_placement_id": 0, 00:17:46.013 "enable_zerocopy_send_server": true, 00:17:46.013 "enable_zerocopy_send_client": false, 00:17:46.013 "zerocopy_threshold": 0, 00:17:46.013 "tls_version": 0, 00:17:46.013 "enable_ktls": false 00:17:46.013 } 00:17:46.013 }, 00:17:46.013 { 00:17:46.013 "method": "sock_impl_set_options", 00:17:46.013 "params": { 00:17:46.013 "impl_name": "ssl", 00:17:46.013 "recv_buf_size": 4096, 00:17:46.013 "send_buf_size": 4096, 00:17:46.013 "enable_recv_pipe": true, 00:17:46.013 "enable_quickack": false, 00:17:46.013 "enable_placement_id": 0, 00:17:46.013 "enable_zerocopy_send_server": true, 00:17:46.013 "enable_zerocopy_send_client": false, 00:17:46.013 "zerocopy_threshold": 0, 00:17:46.013 "tls_version": 0, 00:17:46.013 "enable_ktls": false 00:17:46.013 } 00:17:46.013 } 00:17:46.013 ] 00:17:46.013 }, 00:17:46.013 { 00:17:46.013 "subsystem": "vmd", 00:17:46.013 "config": [] 00:17:46.013 }, 00:17:46.013 { 00:17:46.013 "subsystem": "accel", 00:17:46.013 "config": [ 00:17:46.013 { 00:17:46.013 "method": "accel_set_options", 00:17:46.013 "params": { 00:17:46.013 "small_cache_size": 128, 
00:17:46.013 "large_cache_size": 16, 00:17:46.013 "task_count": 2048, 00:17:46.013 "sequence_count": 2048, 00:17:46.013 "buf_count": 2048 00:17:46.013 } 00:17:46.013 } 00:17:46.013 ] 00:17:46.013 }, 00:17:46.013 { 00:17:46.013 "subsystem": "bdev", 00:17:46.013 "config": [ 00:17:46.013 { 00:17:46.013 "method": "bdev_set_options", 00:17:46.013 "params": { 00:17:46.013 "bdev_io_pool_size": 65535, 00:17:46.013 "bdev_io_cache_size": 256, 00:17:46.013 "bdev_auto_examine": true, 00:17:46.013 "iobuf_small_cache_size": 128, 00:17:46.013 "iobuf_large_cache_size": 16 00:17:46.013 } 00:17:46.013 }, 00:17:46.013 { 00:17:46.013 "method": "bdev_raid_set_options", 00:17:46.013 "params": { 00:17:46.013 "process_window_size_kb": 1024 00:17:46.013 } 00:17:46.013 }, 00:17:46.013 { 00:17:46.013 "method": "bdev_iscsi_set_options", 00:17:46.013 "params": { 00:17:46.013 "timeout_sec": 30 00:17:46.013 } 00:17:46.013 }, 00:17:46.013 { 00:17:46.013 "method": "bdev_nvme_set_options", 00:17:46.013 "params": { 00:17:46.013 "action_on_timeout": "none", 00:17:46.014 "timeout_us": 0, 00:17:46.014 "timeout_admin_us": 0, 00:17:46.014 "keep_alive_timeout_ms": 10000, 00:17:46.014 "transport_retry_count": 4, 00:17:46.014 "arbitration_burst": 0, 00:17:46.014 "low_priority_weight": 0, 00:17:46.014 "medium_priority_weight": 0, 00:17:46.014 "high_priority_weight": 0, 00:17:46.014 "nvme_adminq_poll_period_us": 10000, 00:17:46.014 "nvme_ioq_poll_period_us": 0, 00:17:46.014 "io_queue_requests": 0, 00:17:46.014 "delay_cmd_submit": true, 00:17:46.014 "bdev_retry_count": 3, 00:17:46.014 "transport_ack_timeout": 0, 00:17:46.014 "ctrlr_loss_timeout_sec": 0, 00:17:46.014 "reconnect_delay_sec": 0, 00:17:46.014 "fast_io_fail_timeout_sec": 0, 00:17:46.014 "generate_uuids": false, 00:17:46.014 "transport_tos": 0, 00:17:46.014 "io_path_stat": false, 00:17:46.014 "allow_accel_sequence": false 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "bdev_nvme_set_hotplug", 00:17:46.014 "params": { 00:17:46.014 "period_us": 100000, 00:17:46.014 "enable": false 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "bdev_malloc_create", 00:17:46.014 "params": { 00:17:46.014 "name": "malloc0", 00:17:46.014 "num_blocks": 8192, 00:17:46.014 "block_size": 4096, 00:17:46.014 "physical_block_size": 4096, 00:17:46.014 "uuid": "3d21d373-c8ad-42af-b4f0-ae9071ac2222", 00:17:46.014 "optimal_io_boundary": 0 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "bdev_wait_for_examine" 00:17:46.014 } 00:17:46.014 ] 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "subsystem": "nbd", 00:17:46.014 "config": [] 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "subsystem": "scheduler", 00:17:46.014 "config": [ 00:17:46.014 { 00:17:46.014 "method": "framework_set_scheduler", 00:17:46.014 "params": { 00:17:46.014 "name": "static" 00:17:46.014 } 00:17:46.014 } 00:17:46.014 ] 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "subsystem": "nvmf", 00:17:46.014 "config": [ 00:17:46.014 { 00:17:46.014 "method": "nvmf_set_config", 00:17:46.014 "params": { 00:17:46.014 "discovery_filter": "match_any", 00:17:46.014 "admin_cmd_passthru": { 00:17:46.014 "identify_ctrlr": false 00:17:46.014 } 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "nvmf_set_max_subsystems", 00:17:46.014 "params": { 00:17:46.014 "max_subsystems": 1024 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "nvmf_set_crdt", 00:17:46.014 "params": { 00:17:46.014 "crdt1": 0, 00:17:46.014 "crdt2": 0, 00:17:46.014 "crdt3": 0 00:17:46.014 } 
00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "nvmf_create_transport", 00:17:46.014 "params": { 00:17:46.014 "trtype": "TCP", 00:17:46.014 "max_queue_depth": 128, 00:17:46.014 "max_io_qpairs_per_ctrlr": 127, 00:17:46.014 "in_capsule_data_size": 4096, 00:17:46.014 "max_io_size": 131072, 00:17:46.014 "io_unit_size": 131072, 00:17:46.014 "max_aq_depth": 128, 00:17:46.014 "num_shared_buffers": 511, 00:17:46.014 "buf_cache_size": 4294967295, 00:17:46.014 "dif_insert_or_strip": false, 00:17:46.014 "zcopy": false, 00:17:46.014 "c2h_success": false, 00:17:46.014 "sock_priority": 0, 00:17:46.014 "abort_timeout_sec": 1 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "nvmf_create_subsystem", 00:17:46.014 "params": { 00:17:46.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.014 "allow_any_host": false, 00:17:46.014 "serial_number": "SPDK00000000000001", 00:17:46.014 "model_number": "SPDK bdev Controller", 00:17:46.014 "max_namespaces": 10, 00:17:46.014 "min_cntlid": 1, 00:17:46.014 "max_cntlid": 65519, 00:17:46.014 "ana_reporting": false 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "nvmf_subsystem_add_host", 00:17:46.014 "params": { 00:17:46.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.014 "host": "nqn.2016-06.io.spdk:host1", 00:17:46.014 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "nvmf_subsystem_add_ns", 00:17:46.014 "params": { 00:17:46.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.014 "namespace": { 00:17:46.014 "nsid": 1, 00:17:46.014 "bdev_name": "malloc0", 00:17:46.014 "nguid": "3D21D373C8AD42AFB4F0AE9071AC2222", 00:17:46.014 "uuid": "3d21d373-c8ad-42af-b4f0-ae9071ac2222" 00:17:46.014 } 00:17:46.014 } 00:17:46.014 }, 00:17:46.014 { 00:17:46.014 "method": "nvmf_subsystem_add_listener", 00:17:46.014 "params": { 00:17:46.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.014 "listen_address": { 00:17:46.014 "trtype": "TCP", 00:17:46.014 "adrfam": "IPv4", 00:17:46.014 "traddr": "10.0.0.2", 00:17:46.014 "trsvcid": "4420" 00:17:46.014 }, 00:17:46.014 "secure_channel": true 00:17:46.014 } 00:17:46.014 } 00:17:46.014 ] 00:17:46.014 } 00:17:46.014 ] 00:17:46.014 }' 00:17:46.014 07:35:01 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:46.272 07:35:02 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:46.272 "subsystems": [ 00:17:46.272 { 00:17:46.272 "subsystem": "iobuf", 00:17:46.272 "config": [ 00:17:46.272 { 00:17:46.272 "method": "iobuf_set_options", 00:17:46.272 "params": { 00:17:46.272 "small_pool_count": 8192, 00:17:46.272 "large_pool_count": 1024, 00:17:46.272 "small_bufsize": 8192, 00:17:46.272 "large_bufsize": 135168 00:17:46.272 } 00:17:46.272 } 00:17:46.272 ] 00:17:46.272 }, 00:17:46.272 { 00:17:46.272 "subsystem": "sock", 00:17:46.272 "config": [ 00:17:46.272 { 00:17:46.272 "method": "sock_impl_set_options", 00:17:46.272 "params": { 00:17:46.272 "impl_name": "posix", 00:17:46.272 "recv_buf_size": 2097152, 00:17:46.272 "send_buf_size": 2097152, 00:17:46.272 "enable_recv_pipe": true, 00:17:46.272 "enable_quickack": false, 00:17:46.272 "enable_placement_id": 0, 00:17:46.272 "enable_zerocopy_send_server": true, 00:17:46.272 "enable_zerocopy_send_client": false, 00:17:46.272 "zerocopy_threshold": 0, 00:17:46.272 "tls_version": 0, 00:17:46.272 "enable_ktls": false 00:17:46.272 } 00:17:46.272 }, 00:17:46.272 { 00:17:46.272 "method": 
"sock_impl_set_options", 00:17:46.272 "params": { 00:17:46.272 "impl_name": "ssl", 00:17:46.272 "recv_buf_size": 4096, 00:17:46.272 "send_buf_size": 4096, 00:17:46.272 "enable_recv_pipe": true, 00:17:46.272 "enable_quickack": false, 00:17:46.272 "enable_placement_id": 0, 00:17:46.272 "enable_zerocopy_send_server": true, 00:17:46.272 "enable_zerocopy_send_client": false, 00:17:46.272 "zerocopy_threshold": 0, 00:17:46.272 "tls_version": 0, 00:17:46.272 "enable_ktls": false 00:17:46.272 } 00:17:46.272 } 00:17:46.272 ] 00:17:46.272 }, 00:17:46.272 { 00:17:46.272 "subsystem": "vmd", 00:17:46.272 "config": [] 00:17:46.272 }, 00:17:46.272 { 00:17:46.272 "subsystem": "accel", 00:17:46.272 "config": [ 00:17:46.272 { 00:17:46.272 "method": "accel_set_options", 00:17:46.272 "params": { 00:17:46.272 "small_cache_size": 128, 00:17:46.272 "large_cache_size": 16, 00:17:46.272 "task_count": 2048, 00:17:46.272 "sequence_count": 2048, 00:17:46.272 "buf_count": 2048 00:17:46.272 } 00:17:46.272 } 00:17:46.272 ] 00:17:46.272 }, 00:17:46.272 { 00:17:46.272 "subsystem": "bdev", 00:17:46.272 "config": [ 00:17:46.272 { 00:17:46.272 "method": "bdev_set_options", 00:17:46.272 "params": { 00:17:46.272 "bdev_io_pool_size": 65535, 00:17:46.272 "bdev_io_cache_size": 256, 00:17:46.272 "bdev_auto_examine": true, 00:17:46.272 "iobuf_small_cache_size": 128, 00:17:46.272 "iobuf_large_cache_size": 16 00:17:46.272 } 00:17:46.272 }, 00:17:46.272 { 00:17:46.272 "method": "bdev_raid_set_options", 00:17:46.272 "params": { 00:17:46.272 "process_window_size_kb": 1024 00:17:46.272 } 00:17:46.272 }, 00:17:46.272 { 00:17:46.272 "method": "bdev_iscsi_set_options", 00:17:46.272 "params": { 00:17:46.272 "timeout_sec": 30 00:17:46.272 } 00:17:46.272 }, 00:17:46.272 { 00:17:46.272 "method": "bdev_nvme_set_options", 00:17:46.272 "params": { 00:17:46.272 "action_on_timeout": "none", 00:17:46.272 "timeout_us": 0, 00:17:46.272 "timeout_admin_us": 0, 00:17:46.272 "keep_alive_timeout_ms": 10000, 00:17:46.272 "transport_retry_count": 4, 00:17:46.272 "arbitration_burst": 0, 00:17:46.272 "low_priority_weight": 0, 00:17:46.272 "medium_priority_weight": 0, 00:17:46.272 "high_priority_weight": 0, 00:17:46.272 "nvme_adminq_poll_period_us": 10000, 00:17:46.272 "nvme_ioq_poll_period_us": 0, 00:17:46.272 "io_queue_requests": 512, 00:17:46.272 "delay_cmd_submit": true, 00:17:46.272 "bdev_retry_count": 3, 00:17:46.272 "transport_ack_timeout": 0, 00:17:46.272 "ctrlr_loss_timeout_sec": 0, 00:17:46.272 "reconnect_delay_sec": 0, 00:17:46.272 "fast_io_fail_timeout_sec": 0, 00:17:46.272 "generate_uuids": false, 00:17:46.272 "transport_tos": 0, 00:17:46.272 "io_path_stat": false, 00:17:46.272 "allow_accel_sequence": false 00:17:46.272 } 00:17:46.273 }, 00:17:46.273 { 00:17:46.273 "method": "bdev_nvme_attach_controller", 00:17:46.273 "params": { 00:17:46.273 "name": "TLSTEST", 00:17:46.273 "trtype": "TCP", 00:17:46.273 "adrfam": "IPv4", 00:17:46.273 "traddr": "10.0.0.2", 00:17:46.273 "trsvcid": "4420", 00:17:46.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.273 "prchk_reftag": false, 00:17:46.273 "prchk_guard": false, 00:17:46.273 "ctrlr_loss_timeout_sec": 0, 00:17:46.273 "reconnect_delay_sec": 0, 00:17:46.273 "fast_io_fail_timeout_sec": 0, 00:17:46.273 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:46.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:46.273 "hdgst": false, 00:17:46.273 "ddgst": false 00:17:46.273 } 00:17:46.273 }, 00:17:46.273 { 00:17:46.273 "method": "bdev_nvme_set_hotplug", 00:17:46.273 
"params": { 00:17:46.273 "period_us": 100000, 00:17:46.273 "enable": false 00:17:46.273 } 00:17:46.273 }, 00:17:46.273 { 00:17:46.273 "method": "bdev_wait_for_examine" 00:17:46.273 } 00:17:46.273 ] 00:17:46.273 }, 00:17:46.273 { 00:17:46.273 "subsystem": "nbd", 00:17:46.273 "config": [] 00:17:46.273 } 00:17:46.273 ] 00:17:46.273 }' 00:17:46.273 07:35:02 -- target/tls.sh@208 -- # killprocess 4113179 00:17:46.273 07:35:02 -- common/autotest_common.sh@926 -- # '[' -z 4113179 ']' 00:17:46.273 07:35:02 -- common/autotest_common.sh@930 -- # kill -0 4113179 00:17:46.273 07:35:02 -- common/autotest_common.sh@931 -- # uname 00:17:46.273 07:35:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.273 07:35:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4113179 00:17:46.273 07:35:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:46.273 07:35:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:46.273 07:35:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4113179' 00:17:46.273 killing process with pid 4113179 00:17:46.273 07:35:02 -- common/autotest_common.sh@945 -- # kill 4113179 00:17:46.273 Received shutdown signal, test time was about 10.000000 seconds 00:17:46.273 00:17:46.273 Latency(us) 00:17:46.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.273 =================================================================================================================== 00:17:46.273 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:46.273 07:35:02 -- common/autotest_common.sh@950 -- # wait 4113179 00:17:46.530 07:35:02 -- target/tls.sh@209 -- # killprocess 4112858 00:17:46.530 07:35:02 -- common/autotest_common.sh@926 -- # '[' -z 4112858 ']' 00:17:46.530 07:35:02 -- common/autotest_common.sh@930 -- # kill -0 4112858 00:17:46.530 07:35:02 -- common/autotest_common.sh@931 -- # uname 00:17:46.530 07:35:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:46.530 07:35:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4112858 00:17:46.530 07:35:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:46.530 07:35:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:46.530 07:35:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4112858' 00:17:46.530 killing process with pid 4112858 00:17:46.530 07:35:02 -- common/autotest_common.sh@945 -- # kill 4112858 00:17:46.530 07:35:02 -- common/autotest_common.sh@950 -- # wait 4112858 00:17:46.786 07:35:02 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:46.786 07:35:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:46.786 07:35:02 -- target/tls.sh@212 -- # echo '{ 00:17:46.786 "subsystems": [ 00:17:46.786 { 00:17:46.786 "subsystem": "iobuf", 00:17:46.786 "config": [ 00:17:46.786 { 00:17:46.786 "method": "iobuf_set_options", 00:17:46.786 "params": { 00:17:46.786 "small_pool_count": 8192, 00:17:46.786 "large_pool_count": 1024, 00:17:46.786 "small_bufsize": 8192, 00:17:46.786 "large_bufsize": 135168 00:17:46.786 } 00:17:46.786 } 00:17:46.786 ] 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "subsystem": "sock", 00:17:46.786 "config": [ 00:17:46.786 { 00:17:46.786 "method": "sock_impl_set_options", 00:17:46.786 "params": { 00:17:46.786 "impl_name": "posix", 00:17:46.786 "recv_buf_size": 2097152, 00:17:46.786 "send_buf_size": 2097152, 00:17:46.786 "enable_recv_pipe": true, 00:17:46.786 "enable_quickack": false, 
00:17:46.786 "enable_placement_id": 0, 00:17:46.786 "enable_zerocopy_send_server": true, 00:17:46.786 "enable_zerocopy_send_client": false, 00:17:46.786 "zerocopy_threshold": 0, 00:17:46.786 "tls_version": 0, 00:17:46.786 "enable_ktls": false 00:17:46.786 } 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "method": "sock_impl_set_options", 00:17:46.786 "params": { 00:17:46.786 "impl_name": "ssl", 00:17:46.786 "recv_buf_size": 4096, 00:17:46.786 "send_buf_size": 4096, 00:17:46.786 "enable_recv_pipe": true, 00:17:46.786 "enable_quickack": false, 00:17:46.786 "enable_placement_id": 0, 00:17:46.786 "enable_zerocopy_send_server": true, 00:17:46.786 "enable_zerocopy_send_client": false, 00:17:46.786 "zerocopy_threshold": 0, 00:17:46.786 "tls_version": 0, 00:17:46.786 "enable_ktls": false 00:17:46.786 } 00:17:46.786 } 00:17:46.786 ] 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "subsystem": "vmd", 00:17:46.786 "config": [] 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "subsystem": "accel", 00:17:46.786 "config": [ 00:17:46.786 { 00:17:46.786 "method": "accel_set_options", 00:17:46.786 "params": { 00:17:46.786 "small_cache_size": 128, 00:17:46.786 "large_cache_size": 16, 00:17:46.786 "task_count": 2048, 00:17:46.786 "sequence_count": 2048, 00:17:46.786 "buf_count": 2048 00:17:46.786 } 00:17:46.786 } 00:17:46.786 ] 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "subsystem": "bdev", 00:17:46.786 "config": [ 00:17:46.786 { 00:17:46.786 "method": "bdev_set_options", 00:17:46.786 "params": { 00:17:46.786 "bdev_io_pool_size": 65535, 00:17:46.786 "bdev_io_cache_size": 256, 00:17:46.786 "bdev_auto_examine": true, 00:17:46.786 "iobuf_small_cache_size": 128, 00:17:46.786 "iobuf_large_cache_size": 16 00:17:46.786 } 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "method": "bdev_raid_set_options", 00:17:46.786 "params": { 00:17:46.786 "process_window_size_kb": 1024 00:17:46.786 } 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "method": "bdev_iscsi_set_options", 00:17:46.786 "params": { 00:17:46.786 "timeout_sec": 30 00:17:46.786 } 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "method": "bdev_nvme_set_options", 00:17:46.786 "params": { 00:17:46.786 "action_on_timeout": "none", 00:17:46.786 "timeout_us": 0, 00:17:46.786 "timeout_admin_us": 0, 00:17:46.786 "keep_alive_timeout_ms": 10000, 00:17:46.786 "transport_retry_count": 4, 00:17:46.786 "arbitration_burst": 0, 00:17:46.786 "low_priority_weight": 0, 00:17:46.786 "medium_priority_weight": 0, 00:17:46.786 "high_priority_weight": 0, 00:17:46.786 "nvme_adminq_poll_period_us": 10000, 00:17:46.786 "nvme_ioq_poll_period_us": 0, 00:17:46.786 "io_queue_requests": 0, 00:17:46.786 "delay_cmd_submit": true, 00:17:46.786 "bdev_retry_count": 3, 00:17:46.786 "transport_ack_timeout": 0, 00:17:46.786 "ctrlr_loss_timeout_sec": 0, 00:17:46.786 "reconnect_delay_sec": 0, 00:17:46.786 "fast_io_fail_timeout_sec": 0, 00:17:46.786 "generate_uuids": false, 00:17:46.786 "transport_tos": 0, 00:17:46.786 "io_path_stat": false, 00:17:46.786 "allow_accel_sequence": false 00:17:46.786 } 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "method": "bdev_nvme_set_hotplug", 00:17:46.786 "params": { 00:17:46.786 "period_us": 100000, 00:17:46.786 "enable": false 00:17:46.786 } 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "method": "bdev_malloc_create", 00:17:46.786 "params": { 00:17:46.786 "name": "malloc0", 00:17:46.786 "num_blocks": 8192, 00:17:46.786 "block_size": 4096, 00:17:46.786 "physical_block_size": 4096, 00:17:46.786 "uuid": "3d21d373-c8ad-42af-b4f0-ae9071ac2222", 00:17:46.786 "optimal_io_boundary": 0 00:17:46.786 
} 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "method": "bdev_wait_for_examine" 00:17:46.786 } 00:17:46.786 ] 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "subsystem": "nbd", 00:17:46.786 "config": [] 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "subsystem": "scheduler", 00:17:46.786 "config": [ 00:17:46.786 { 00:17:46.786 "method": "framework_set_scheduler", 00:17:46.786 "params": { 00:17:46.786 "name": "static" 00:17:46.786 } 00:17:46.786 } 00:17:46.786 ] 00:17:46.786 }, 00:17:46.786 { 00:17:46.786 "subsystem": "nvmf", 00:17:46.787 "config": [ 00:17:46.787 { 00:17:46.787 "method": "nvmf_set_config", 00:17:46.787 "params": { 00:17:46.787 "discovery_filter": "match_any", 00:17:46.787 "admin_cmd_passthru": { 00:17:46.787 "identify_ctrlr": false 00:17:46.787 } 00:17:46.787 } 00:17:46.787 }, 00:17:46.787 { 00:17:46.787 "method": "nvmf_set_max_subsystems", 00:17:46.787 "params": { 00:17:46.787 "max_subsystems": 1024 00:17:46.787 } 00:17:46.787 }, 00:17:46.787 { 00:17:46.787 "method": "nvmf_set_crdt", 00:17:46.787 "params": { 00:17:46.787 "crdt1": 0, 00:17:46.787 "crdt2": 0, 00:17:46.787 "crdt3": 0 00:17:46.787 } 00:17:46.787 }, 00:17:46.787 { 00:17:46.787 "method": "nvmf_create_transport", 00:17:46.787 "params": { 00:17:46.787 "trtype": "TCP", 00:17:46.787 "max_queue_depth": 128, 00:17:46.787 "max_io_qpairs_per_ctrlr": 127, 00:17:46.787 "in_capsule_data_size": 4096, 00:17:46.787 "max_io_size": 131072, 00:17:46.787 "io_unit_size": 131072, 00:17:46.787 "max_aq_depth": 128, 00:17:46.787 "num_shared_buffers": 511, 00:17:46.787 "buf_cache_size": 4294967295, 00:17:46.787 "dif_insert_or_strip": false, 00:17:46.787 "zcopy": false, 00:17:46.787 "c2h_success": false, 00:17:46.787 "sock_priority": 0, 00:17:46.787 "abort_timeout_sec": 1 00:17:46.787 } 00:17:46.787 }, 00:17:46.787 { 00:17:46.787 "method": "nvmf_create_subsystem", 00:17:46.787 "params": { 00:17:46.787 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.787 "allow_any_host": false, 00:17:46.787 "serial_number": "SPDK00000000000001", 00:17:46.787 "model_number": "SPDK bdev Controller", 00:17:46.787 "max_namespaces": 10, 00:17:46.787 "min_cntlid": 1, 00:17:46.787 "max_cntlid": 65519, 00:17:46.787 "ana_reporting": false 00:17:46.787 } 00:17:46.787 }, 00:17:46.787 { 00:17:46.787 "method": "nvmf_subsystem_add_host", 00:17:46.787 "params": { 00:17:46.787 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.787 "host": "nqn.2016-06.io.spdk:host1", 00:17:46.787 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:46.787 } 00:17:46.787 }, 00:17:46.787 { 00:17:46.787 "method": "nvmf_subsystem_add_ns", 00:17:46.787 "params": { 00:17:46.787 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.787 "namespace": { 00:17:46.787 "nsid": 1, 00:17:46.787 "bdev_name": "malloc0", 00:17:46.787 "nguid": "3D21D373C8AD42AFB4F0AE9071AC2222", 00:17:46.787 "uuid": "3d21d373-c8ad-42af-b4f0-ae9071ac2222" 00:17:46.787 } 00:17:46.787 } 00:17:46.787 }, 00:17:46.787 { 00:17:46.787 "method": "nvmf_subsystem_add_listener", 00:17:46.787 "params": { 00:17:46.787 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.787 "listen_address": { 00:17:46.787 "trtype": "TCP", 00:17:46.787 "adrfam": "IPv4", 00:17:46.787 "traddr": "10.0.0.2", 00:17:46.787 "trsvcid": "4420" 00:17:46.787 }, 00:17:46.787 "secure_channel": true 00:17:46.787 } 00:17:46.787 } 00:17:46.787 ] 00:17:46.787 } 00:17:46.787 ] 00:17:46.787 }' 00:17:46.787 07:35:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:46.787 07:35:02 -- common/autotest_common.sh@10 -- # set +x 00:17:46.787 07:35:02 -- 
nvmf/common.sh@469 -- # nvmfpid=4113468 00:17:46.787 07:35:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:46.787 07:35:02 -- nvmf/common.sh@470 -- # waitforlisten 4113468 00:17:46.787 07:35:02 -- common/autotest_common.sh@819 -- # '[' -z 4113468 ']' 00:17:46.787 07:35:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.787 07:35:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:46.787 07:35:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.787 07:35:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:46.787 07:35:02 -- common/autotest_common.sh@10 -- # set +x 00:17:46.787 [2024-07-14 07:35:02.916764] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:46.787 [2024-07-14 07:35:02.916840] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.787 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.043 [2024-07-14 07:35:02.982993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.043 [2024-07-14 07:35:03.100073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:47.043 [2024-07-14 07:35:03.100241] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.043 [2024-07-14 07:35:03.100261] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.043 [2024-07-14 07:35:03.100275] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
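
Note that this nvmf_tgt instance (pid 4113468) is never configured through rpc.py: target/tls.sh@212 feeds the $tgtconf JSON captured above straight back in via -c /dev/fd/62, which is why the TCP transport and TLS listener notices appear immediately below, before any RPC traffic. The /dev/fd path points at process substitution; the exact plumbing is not visible in this trace, but a plausible reconstruction of the pattern is:

    # hedged sketch: replay a save_config snapshot at process start
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")        # shows up as -c /dev/fd/62 in xtrace
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf")                              # shows up as -c /dev/fd/63
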
00:17:47.043 [2024-07-14 07:35:03.100314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.299 [2024-07-14 07:35:03.335647] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.299 [2024-07-14 07:35:03.367719] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.299 [2024-07-14 07:35:03.367961] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.892 07:35:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:47.892 07:35:03 -- common/autotest_common.sh@852 -- # return 0 00:17:47.892 07:35:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:47.892 07:35:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:47.892 07:35:03 -- common/autotest_common.sh@10 -- # set +x 00:17:47.892 07:35:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.892 07:35:03 -- target/tls.sh@216 -- # bdevperf_pid=4113624 00:17:47.892 07:35:03 -- target/tls.sh@217 -- # waitforlisten 4113624 /var/tmp/bdevperf.sock 00:17:47.892 07:35:03 -- common/autotest_common.sh@819 -- # '[' -z 4113624 ']' 00:17:47.892 07:35:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.892 07:35:03 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:47.892 07:35:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:47.892 07:35:03 -- target/tls.sh@213 -- # echo '{ 00:17:47.892 "subsystems": [ 00:17:47.892 { 00:17:47.892 "subsystem": "iobuf", 00:17:47.892 "config": [ 00:17:47.892 { 00:17:47.892 "method": "iobuf_set_options", 00:17:47.892 "params": { 00:17:47.892 "small_pool_count": 8192, 00:17:47.892 "large_pool_count": 1024, 00:17:47.892 "small_bufsize": 8192, 00:17:47.892 "large_bufsize": 135168 00:17:47.892 } 00:17:47.892 } 00:17:47.892 ] 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "subsystem": "sock", 00:17:47.892 "config": [ 00:17:47.892 { 00:17:47.892 "method": "sock_impl_set_options", 00:17:47.892 "params": { 00:17:47.892 "impl_name": "posix", 00:17:47.892 "recv_buf_size": 2097152, 00:17:47.892 "send_buf_size": 2097152, 00:17:47.892 "enable_recv_pipe": true, 00:17:47.892 "enable_quickack": false, 00:17:47.892 "enable_placement_id": 0, 00:17:47.892 "enable_zerocopy_send_server": true, 00:17:47.892 "enable_zerocopy_send_client": false, 00:17:47.892 "zerocopy_threshold": 0, 00:17:47.892 "tls_version": 0, 00:17:47.892 "enable_ktls": false 00:17:47.892 } 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "method": "sock_impl_set_options", 00:17:47.892 "params": { 00:17:47.892 "impl_name": "ssl", 00:17:47.892 "recv_buf_size": 4096, 00:17:47.892 "send_buf_size": 4096, 00:17:47.892 "enable_recv_pipe": true, 00:17:47.892 "enable_quickack": false, 00:17:47.892 "enable_placement_id": 0, 00:17:47.892 "enable_zerocopy_send_server": true, 00:17:47.892 "enable_zerocopy_send_client": false, 00:17:47.892 "zerocopy_threshold": 0, 00:17:47.892 "tls_version": 0, 00:17:47.892 "enable_ktls": false 00:17:47.892 } 00:17:47.892 } 00:17:47.892 ] 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "subsystem": "vmd", 00:17:47.892 "config": [] 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "subsystem": "accel", 00:17:47.892 "config": [ 00:17:47.892 { 00:17:47.892 "method": "accel_set_options", 00:17:47.892 "params": { 00:17:47.892 "small_cache_size": 128, 00:17:47.892 
"large_cache_size": 16, 00:17:47.892 "task_count": 2048, 00:17:47.892 "sequence_count": 2048, 00:17:47.892 "buf_count": 2048 00:17:47.892 } 00:17:47.892 } 00:17:47.892 ] 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "subsystem": "bdev", 00:17:47.892 "config": [ 00:17:47.892 { 00:17:47.892 "method": "bdev_set_options", 00:17:47.892 "params": { 00:17:47.892 "bdev_io_pool_size": 65535, 00:17:47.892 "bdev_io_cache_size": 256, 00:17:47.892 "bdev_auto_examine": true, 00:17:47.892 "iobuf_small_cache_size": 128, 00:17:47.892 "iobuf_large_cache_size": 16 00:17:47.892 } 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "method": "bdev_raid_set_options", 00:17:47.892 "params": { 00:17:47.892 "process_window_size_kb": 1024 00:17:47.892 } 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "method": "bdev_iscsi_set_options", 00:17:47.892 "params": { 00:17:47.892 "timeout_sec": 30 00:17:47.892 } 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "method": "bdev_nvme_set_options", 00:17:47.892 "params": { 00:17:47.892 "action_on_timeout": "none", 00:17:47.892 "timeout_us": 0, 00:17:47.892 "timeout_admin_us": 0, 00:17:47.892 "keep_alive_timeout_ms": 10000, 00:17:47.892 "transport_retry_count": 4, 00:17:47.892 "arbitration_burst": 0, 00:17:47.892 "low_priority_weight": 0, 00:17:47.892 "medium_priority_weight": 0, 00:17:47.892 "high_priority_weight": 0, 00:17:47.892 "nvme_adminq_poll_period_us": 10000, 00:17:47.892 "nvme_ioq_poll_period_us": 0, 00:17:47.892 "io_queue_requests": 512, 00:17:47.892 "delay_cmd_submit": true, 00:17:47.892 "bdev_retry_count": 3, 00:17:47.892 "transport_ack_timeout": 0, 00:17:47.892 "ctrlr_loss_timeout_sec": 0, 00:17:47.892 "reconnect_delay_sec": 0, 00:17:47.892 "fast_io_fail_timeout_sec": 0, 00:17:47.892 "generate_uuids": false, 00:17:47.892 "transport_tos": 0, 00:17:47.892 "io_path_stat": false, 00:17:47.892 "allow_accel_sequence": false 00:17:47.892 } 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "method": "bdev_nvme_attach_controller", 00:17:47.892 "params": { 00:17:47.892 "name": "TLSTEST", 00:17:47.892 "trtype": "TCP", 00:17:47.892 "adrfam": "IPv4", 00:17:47.892 "traddr": "10.0.0.2", 00:17:47.892 "trsvcid": "4420", 00:17:47.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.892 "prchk_reftag": false, 00:17:47.892 "prchk_guard": false, 00:17:47.892 "ctrlr_loss_timeout_sec": 0, 00:17:47.892 "reconnect_delay_sec": 0, 00:17:47.892 "fast_io_fail_timeout_sec": 0, 00:17:47.892 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:47.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.892 "hdgst": false, 00:17:47.892 "ddgst": false 00:17:47.892 } 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "method": "bdev_nvme_set_hotplug", 00:17:47.892 "params": { 00:17:47.892 "period_us": 100000, 00:17:47.892 "enable": false 00:17:47.892 } 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "method": "bdev_wait_for_examine" 00:17:47.892 } 00:17:47.892 ] 00:17:47.892 }, 00:17:47.892 { 00:17:47.892 "subsystem": "nbd", 00:17:47.892 "config": [] 00:17:47.892 } 00:17:47.892 ] 00:17:47.892 }' 00:17:47.892 07:35:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:47.892 07:35:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:47.892 07:35:03 -- common/autotest_common.sh@10 -- # set +x 00:17:47.892 [2024-07-14 07:35:03.954558] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:47.892 [2024-07-14 07:35:03.954636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113624 ] 00:17:47.892 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.892 [2024-07-14 07:35:04.010859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.149 [2024-07-14 07:35:04.115719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.149 [2024-07-14 07:35:04.275006] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.080 07:35:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.080 07:35:04 -- common/autotest_common.sh@852 -- # return 0 00:17:49.080 07:35:04 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:49.080 Running I/O for 10 seconds... 00:17:59.042 00:17:59.042 Latency(us) 00:17:59.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.042 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:59.042 Verification LBA range: start 0x0 length 0x2000 00:17:59.042 TLSTESTn1 : 10.04 1373.31 5.36 0.00 0.00 92982.44 13107.20 114178.28 00:17:59.042 =================================================================================================================== 00:17:59.042 Total : 1373.31 5.36 0.00 0.00 92982.44 13107.20 114178.28 00:17:59.042 0 00:17:59.042 07:35:15 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.042 07:35:15 -- target/tls.sh@223 -- # killprocess 4113624 00:17:59.042 07:35:15 -- common/autotest_common.sh@926 -- # '[' -z 4113624 ']' 00:17:59.042 07:35:15 -- common/autotest_common.sh@930 -- # kill -0 4113624 00:17:59.042 07:35:15 -- common/autotest_common.sh@931 -- # uname 00:17:59.042 07:35:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.042 07:35:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4113624 00:17:59.042 07:35:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:59.042 07:35:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:59.042 07:35:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4113624' 00:17:59.042 killing process with pid 4113624 00:17:59.042 07:35:15 -- common/autotest_common.sh@945 -- # kill 4113624 00:17:59.042 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.042 00:17:59.042 Latency(us) 00:17:59.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.042 =================================================================================================================== 00:17:59.042 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.042 07:35:15 -- common/autotest_common.sh@950 -- # wait 4113624 00:17:59.301 07:35:15 -- target/tls.sh@224 -- # killprocess 4113468 00:17:59.301 07:35:15 -- common/autotest_common.sh@926 -- # '[' -z 4113468 ']' 00:17:59.301 07:35:15 -- common/autotest_common.sh@930 -- # kill -0 4113468 00:17:59.301 07:35:15 -- 
common/autotest_common.sh@931 -- # uname 00:17:59.301 07:35:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.301 07:35:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4113468 00:17:59.301 07:35:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:59.301 07:35:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:59.301 07:35:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4113468' 00:17:59.301 killing process with pid 4113468 00:17:59.301 07:35:15 -- common/autotest_common.sh@945 -- # kill 4113468 00:17:59.301 07:35:15 -- common/autotest_common.sh@950 -- # wait 4113468 00:17:59.560 07:35:15 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:59.560 07:35:15 -- target/tls.sh@227 -- # cleanup 00:17:59.560 07:35:15 -- target/tls.sh@15 -- # process_shm --id 0 00:17:59.560 07:35:15 -- common/autotest_common.sh@796 -- # type=--id 00:17:59.560 07:35:15 -- common/autotest_common.sh@797 -- # id=0 00:17:59.560 07:35:15 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:59.560 07:35:15 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:59.560 07:35:15 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:59.560 07:35:15 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:59.560 07:35:15 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:59.560 07:35:15 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:59.560 nvmf_trace.0 00:17:59.820 07:35:15 -- common/autotest_common.sh@811 -- # return 0 00:17:59.820 07:35:15 -- target/tls.sh@16 -- # killprocess 4113624 00:17:59.820 07:35:15 -- common/autotest_common.sh@926 -- # '[' -z 4113624 ']' 00:17:59.820 07:35:15 -- common/autotest_common.sh@930 -- # kill -0 4113624 00:17:59.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4113624) - No such process 00:17:59.820 07:35:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4113624 is not found' 00:17:59.820 Process with pid 4113624 is not found 00:17:59.820 07:35:15 -- target/tls.sh@17 -- # nvmftestfini 00:17:59.820 07:35:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:59.820 07:35:15 -- nvmf/common.sh@116 -- # sync 00:17:59.820 07:35:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:59.820 07:35:15 -- nvmf/common.sh@119 -- # set +e 00:17:59.820 07:35:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:59.820 07:35:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:59.820 rmmod nvme_tcp 00:17:59.820 rmmod nvme_fabrics 00:17:59.820 rmmod nvme_keyring 00:17:59.820 07:35:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:59.820 07:35:15 -- nvmf/common.sh@123 -- # set -e 00:17:59.820 07:35:15 -- nvmf/common.sh@124 -- # return 0 00:17:59.820 07:35:15 -- nvmf/common.sh@477 -- # '[' -n 4113468 ']' 00:17:59.820 07:35:15 -- nvmf/common.sh@478 -- # killprocess 4113468 00:17:59.820 07:35:15 -- common/autotest_common.sh@926 -- # '[' -z 4113468 ']' 00:17:59.820 07:35:15 -- common/autotest_common.sh@930 -- # kill -0 4113468 00:17:59.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4113468) - No such process 00:17:59.820 07:35:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4113468 is not found' 00:17:59.820 Process with pid 4113468 is not found 00:17:59.820 
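
Both killprocess calls above print "Process with pid ... is not found": the SIGINT/SIGTERM/EXIT trap and the explicit kills earlier already reaped bdevperf (4113624) and the target (4113468), so the kill -0 probe at autotest_common.sh line 930 fails (bash itself emits the "No such process" line) and the function falls through to the not-found echo at line 953. Reconstructed from those xtrace line numbers, the guard is roughly:

    # hedged sketch of killprocess, condensed from the line numbers in this trace
    killprocess() {
        [ -z "$1" ] && return 1                        # @926: require a pid argument
        if ! kill -0 "$1"; then                        # @930: probe; bash prints "No such process" here
            echo "Process with pid $1 is not found"    # @953
            return 0
        fi
        # live-pid path (uname / ps / kill / wait) is the one exercised earlier in this log
    }
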
07:35:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.820 07:35:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.820 07:35:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.820 07:35:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.820 07:35:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.820 07:35:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.820 07:35:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.820 07:35:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.727 07:35:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:01.727 07:35:17 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:01.727 00:18:01.727 real 1m15.391s 00:18:01.727 user 1m52.750s 00:18:01.727 sys 0m24.651s 00:18:01.727 07:35:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.727 07:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:01.727 ************************************ 00:18:01.727 END TEST nvmf_tls 00:18:01.727 ************************************ 00:18:01.727 07:35:17 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.727 07:35:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:01.727 07:35:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:01.727 07:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:01.727 ************************************ 00:18:01.727 START TEST nvmf_fips 00:18:01.727 ************************************ 00:18:01.727 07:35:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.986 * Looking for test storage... 
00:18:01.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:01.986 07:35:17 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.986 07:35:17 -- nvmf/common.sh@7 -- # uname -s 00:18:01.986 07:35:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.986 07:35:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.986 07:35:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.986 07:35:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.986 07:35:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.986 07:35:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.986 07:35:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.986 07:35:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.986 07:35:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.986 07:35:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.986 07:35:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.986 07:35:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.986 07:35:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.986 07:35:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.986 07:35:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.986 07:35:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.986 07:35:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.986 07:35:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.987 07:35:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.987 07:35:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.987 07:35:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.987 07:35:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.987 07:35:17 -- paths/export.sh@5 -- # export PATH 00:18:01.987 07:35:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.987 07:35:17 -- nvmf/common.sh@46 -- # : 0 00:18:01.987 07:35:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:01.987 07:35:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:01.987 07:35:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:01.987 07:35:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.987 07:35:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.987 07:35:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:01.987 07:35:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:01.987 07:35:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:01.987 07:35:17 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.987 07:35:17 -- fips/fips.sh@89 -- # check_openssl_version 00:18:01.987 07:35:17 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:01.987 07:35:17 -- fips/fips.sh@85 -- # openssl version 00:18:01.987 07:35:17 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:01.987 07:35:17 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:01.987 07:35:17 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:01.987 07:35:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:01.987 07:35:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:01.987 07:35:17 -- scripts/common.sh@335 -- # IFS=.-: 00:18:01.987 07:35:17 -- scripts/common.sh@335 -- # read -ra ver1 00:18:01.987 07:35:17 -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.987 07:35:17 -- scripts/common.sh@336 -- # read -ra ver2 00:18:01.987 07:35:17 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:01.987 07:35:17 -- scripts/common.sh@339 -- # ver1_l=3 00:18:01.987 07:35:17 -- scripts/common.sh@340 -- # ver2_l=3 00:18:01.987 07:35:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:01.987 07:35:17 -- scripts/common.sh@343 -- # case "$op" in 00:18:01.987 07:35:17 -- scripts/common.sh@347 -- # : 1 00:18:01.987 07:35:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:01.987 07:35:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.987 07:35:17 -- scripts/common.sh@364 -- # decimal 3 00:18:01.987 07:35:17 -- scripts/common.sh@352 -- # local d=3 00:18:01.987 07:35:17 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:01.987 07:35:17 -- scripts/common.sh@354 -- # echo 3 00:18:01.987 07:35:17 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:01.987 07:35:17 -- scripts/common.sh@365 -- # decimal 3 00:18:01.987 07:35:17 -- scripts/common.sh@352 -- # local d=3 00:18:01.987 07:35:17 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:01.987 07:35:17 -- scripts/common.sh@354 -- # echo 3 00:18:01.987 07:35:17 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:01.987 07:35:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:01.987 07:35:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:01.987 07:35:17 -- scripts/common.sh@363 -- # (( v++ )) 00:18:01.987 07:35:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.987 07:35:17 -- scripts/common.sh@364 -- # decimal 0 00:18:01.987 07:35:17 -- scripts/common.sh@352 -- # local d=0 00:18:01.987 07:35:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:01.987 07:35:17 -- scripts/common.sh@354 -- # echo 0 00:18:01.987 07:35:17 -- scripts/common.sh@364 -- # ver1[v]=0 00:18:01.987 07:35:17 -- scripts/common.sh@365 -- # decimal 0 00:18:01.987 07:35:17 -- scripts/common.sh@352 -- # local d=0 00:18:01.987 07:35:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:01.987 07:35:17 -- scripts/common.sh@354 -- # echo 0 00:18:01.987 07:35:17 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:01.987 07:35:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:01.987 07:35:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:01.987 07:35:17 -- scripts/common.sh@363 -- # (( v++ )) 00:18:01.987 07:35:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.987 07:35:17 -- scripts/common.sh@364 -- # decimal 9 00:18:01.987 07:35:17 -- scripts/common.sh@352 -- # local d=9 00:18:01.987 07:35:17 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:01.987 07:35:17 -- scripts/common.sh@354 -- # echo 9 00:18:01.987 07:35:17 -- scripts/common.sh@364 -- # ver1[v]=9 00:18:01.987 07:35:17 -- scripts/common.sh@365 -- # decimal 0 00:18:01.987 07:35:17 -- scripts/common.sh@352 -- # local d=0 00:18:01.987 07:35:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:01.987 07:35:17 -- scripts/common.sh@354 -- # echo 0 00:18:01.987 07:35:17 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:01.987 07:35:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:01.987 07:35:17 -- scripts/common.sh@366 -- # return 0 00:18:01.987 07:35:17 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:01.987 07:35:17 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:01.987 07:35:17 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:01.987 07:35:17 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:01.987 07:35:17 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:01.987 07:35:17 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:01.987 07:35:17 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:01.987 07:35:17 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:18:01.987 07:35:17 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:18:01.987 07:35:17 -- fips/fips.sh@114 -- # build_openssl_config 00:18:01.987 07:35:17 -- fips/fips.sh@37 -- # cat 00:18:01.987 07:35:17 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:01.987 07:35:17 -- fips/fips.sh@58 -- # cat - 00:18:01.987 07:35:17 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:01.987 07:35:17 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:01.987 07:35:17 -- fips/fips.sh@117 -- # mapfile -t providers 00:18:01.987 07:35:17 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:18:01.987 07:35:17 -- fips/fips.sh@117 -- # openssl list -providers 00:18:01.987 07:35:17 -- fips/fips.sh@117 -- # grep name 00:18:01.987 07:35:18 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:01.987 07:35:18 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:01.987 07:35:18 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:01.987 07:35:18 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:01.987 07:35:18 -- fips/fips.sh@128 -- # : 00:18:01.987 07:35:18 -- common/autotest_common.sh@640 -- # local es=0 00:18:01.987 07:35:18 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:01.987 07:35:18 -- common/autotest_common.sh@628 -- # local arg=openssl 00:18:01.987 07:35:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:01.987 07:35:18 -- common/autotest_common.sh@632 -- # type -t openssl 00:18:01.987 07:35:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:01.987 07:35:18 -- common/autotest_common.sh@634 -- # type -P openssl 00:18:01.987 07:35:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:01.987 07:35:18 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:18:01.987 07:35:18 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:18:01.987 07:35:18 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:18:01.987 Error setting digest 00:18:01.987 00B2D863AD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:01.987 00B2D863AD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:01.987 07:35:18 -- common/autotest_common.sh@643 -- # es=1 00:18:01.987 07:35:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:01.987 07:35:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:01.987 07:35:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
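
The FIPS gate above does three things: requires OpenSSL >= 3.0, requires both the base and fips providers to be loaded, and then deliberately runs MD5, treating success as a failure, because a FIPS-enforcing build must reject non-approved digests (so the "Error setting digest" output is the expected result). A standalone sketch of the same checks, assuming an OpenSSL 3.x host; the script below is illustrative and not taken from fips.sh:

#!/usr/bin/env bash
set -euo pipefail

# Gate on OpenSSL >= 3.0.0, using sort -V instead of fips.sh's cmp_versions.
ver=$(openssl version | awk '{print $2}')
if [[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) != 3.0.0 ]]; then
    echo "need OpenSSL >= 3.0.0, found $ver" >&2; exit 1
fi

# Both the base and fips providers must show up in the provider list.
providers=$(openssl list -providers | grep name)
grep -qi base <<< "$providers" || { echo 'base provider missing' >&2; exit 1; }
grep -qi fips <<< "$providers" || { echo 'fips provider missing' >&2; exit 1; }

# A non-approved digest must fail; if MD5 works, FIPS is not being enforced.
if echo test | openssl md5 >/dev/null 2>&1; then
    echo 'MD5 succeeded - FIPS mode is NOT enforced' >&2; exit 1
fi
echo 'FIPS mode looks active'
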
00:18:01.987 07:35:18 -- fips/fips.sh@131 -- # nvmftestinit 00:18:01.987 07:35:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:01.987 07:35:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.987 07:35:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:01.987 07:35:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:01.987 07:35:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:01.987 07:35:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.987 07:35:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.987 07:35:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.987 07:35:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:01.987 07:35:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:01.987 07:35:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:01.987 07:35:18 -- common/autotest_common.sh@10 -- # set +x 00:18:04.518 07:35:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:04.518 07:35:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:04.518 07:35:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:04.518 07:35:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:04.518 07:35:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:04.518 07:35:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:04.518 07:35:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:04.518 07:35:20 -- nvmf/common.sh@294 -- # net_devs=() 00:18:04.518 07:35:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:04.518 07:35:20 -- nvmf/common.sh@295 -- # e810=() 00:18:04.518 07:35:20 -- nvmf/common.sh@295 -- # local -ga e810 00:18:04.518 07:35:20 -- nvmf/common.sh@296 -- # x722=() 00:18:04.518 07:35:20 -- nvmf/common.sh@296 -- # local -ga x722 00:18:04.518 07:35:20 -- nvmf/common.sh@297 -- # mlx=() 00:18:04.518 07:35:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:04.518 07:35:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.518 07:35:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.518 07:35:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.518 07:35:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.518 07:35:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.518 07:35:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.518 07:35:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.518 07:35:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.519 07:35:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.519 07:35:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.519 07:35:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.519 07:35:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:04.519 07:35:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:04.519 07:35:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:04.519 07:35:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.519 07:35:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:04.519 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:18:04.519 07:35:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.519 07:35:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:04.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:04.519 07:35:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:04.519 07:35:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.519 07:35:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.519 07:35:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.519 07:35:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.519 07:35:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:04.519 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:04.519 07:35:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.519 07:35:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.519 07:35:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.519 07:35:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.519 07:35:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.519 07:35:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:04.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:04.519 07:35:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.519 07:35:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:04.519 07:35:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:04.519 07:35:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:04.519 07:35:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.519 07:35:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.519 07:35:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.519 07:35:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:04.519 07:35:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.519 07:35:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.519 07:35:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:04.519 07:35:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.519 07:35:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.519 07:35:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:04.519 07:35:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:04.519 07:35:20 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:18:04.519 07:35:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:04.519 07:35:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:04.519 07:35:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:04.519 07:35:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:04.519 07:35:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:04.519 07:35:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:04.519 07:35:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:04.519 07:35:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:04.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:18:04.519 00:18:04.519 --- 10.0.0.2 ping statistics --- 00:18:04.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.519 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:04.519 07:35:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:04.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:04.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:18:04.519 00:18:04.519 --- 10.0.0.1 ping statistics --- 00:18:04.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.519 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:18:04.519 07:35:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.519 07:35:20 -- nvmf/common.sh@410 -- # return 0 00:18:04.519 07:35:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:04.519 07:35:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.519 07:35:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:04.519 07:35:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.519 07:35:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:04.519 07:35:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:04.519 07:35:20 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:04.519 07:35:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.519 07:35:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:04.519 07:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:04.519 07:35:20 -- nvmf/common.sh@469 -- # nvmfpid=4116969 00:18:04.519 07:35:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.519 07:35:20 -- nvmf/common.sh@470 -- # waitforlisten 4116969 00:18:04.519 07:35:20 -- common/autotest_common.sh@819 -- # '[' -z 4116969 ']' 00:18:04.519 07:35:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.519 07:35:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.519 07:35:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.519 07:35:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.519 07:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:04.519 [2024-07-14 07:35:20.344100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
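
Everything from ip netns add through the two pings builds the self-contained rig these tests run on: one port of the NIC pair is moved into a private namespace and becomes the target at 10.0.0.2, while its link partner stays in the root namespace as the initiator at 10.0.0.1. Distilled to its essentials (a sketch assuming two cabled interfaces named in IF_TGT/IF_INI; run as root):

#!/usr/bin/env bash
set -euo pipefail

NS=cvl_0_0_ns_spdk        # namespace that owns the target-side port
IF_TGT=cvl_0_0            # port handed to the target
IF_INI=cvl_0_1            # link partner kept for the initiator

ip netns add "$NS"
ip link set "$IF_TGT" netns "$NS"            # isolate the target port

ip addr add 10.0.0.1/24 dev "$IF_INI"        # initiator, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"

ip link set "$IF_INI" up
ip netns exec "$NS" ip link set "$IF_TGT" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic and prove reachability in both directions.
iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
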
00:18:04.519 [2024-07-14 07:35:20.344194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.519 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.519 [2024-07-14 07:35:20.411091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.519 [2024-07-14 07:35:20.531910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.519 [2024-07-14 07:35:20.532071] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.519 [2024-07-14 07:35:20.532091] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.519 [2024-07-14 07:35:20.532104] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.519 [2024-07-14 07:35:20.532135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.453 07:35:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:05.453 07:35:21 -- common/autotest_common.sh@852 -- # return 0 00:18:05.453 07:35:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:05.453 07:35:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:05.453 07:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:05.453 07:35:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.453 07:35:21 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:05.453 07:35:21 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:05.453 07:35:21 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:05.453 07:35:21 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:05.453 07:35:21 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:05.453 07:35:21 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:05.453 07:35:21 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:05.453 07:35:21 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.453 [2024-07-14 07:35:21.548073] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.453 [2024-07-14 07:35:21.564093] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.453 [2024-07-14 07:35:21.564313] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.453 malloc0 00:18:05.453 07:35:21 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.453 07:35:21 -- fips/fips.sh@148 -- # bdevperf_pid=4117208 00:18:05.453 07:35:21 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.453 07:35:21 -- fips/fips.sh@149 -- # waitforlisten 4117208 /var/tmp/bdevperf.sock 00:18:05.454 07:35:21 -- common/autotest_common.sh@819 -- # '[' -z 4117208 ']' 00:18:05.454 07:35:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.454 07:35:21 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:18:05.454 07:35:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.454 07:35:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:05.454 07:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:05.712 [2024-07-14 07:35:21.681093] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:05.712 [2024-07-14 07:35:21.681178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117208 ] 00:18:05.712 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.712 [2024-07-14 07:35:21.736290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.712 [2024-07-14 07:35:21.838714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.644 07:35:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:06.644 07:35:22 -- common/autotest_common.sh@852 -- # return 0 00:18:06.644 07:35:22 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:06.902 [2024-07-14 07:35:22.859676] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.902 TLSTESTn1 00:18:06.902 07:35:22 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.902 Running I/O for 10 seconds... 
00:18:19.111
00:18:19.111 Latency(us)
00:18:19.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:19.111 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:19.111 Verification LBA range: start 0x0 length 0x2000
00:18:19.111 TLSTESTn1 : 10.03 1747.74 6.83 0.00 0.00 73112.73 4854.52 80390.83
00:18:19.111 ===================================================================================================================
00:18:19.111 Total : 1747.74 6.83 0.00 0.00 73112.73 4854.52 80390.83
00:18:19.111 0
00:18:19.111 07:35:33 -- fips/fips.sh@1 -- # cleanup
00:18:19.111 07:35:33 -- fips/fips.sh@15 -- # process_shm --id 0
00:18:19.111 07:35:33 -- common/autotest_common.sh@796 -- # type=--id
00:18:19.111 07:35:33 -- common/autotest_common.sh@797 -- # id=0
00:18:19.111 07:35:33 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']'
00:18:19.111 07:35:33 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:18:19.111 07:35:33 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0
00:18:19.111 07:35:33 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]]
00:18:19.111 07:35:33 -- common/autotest_common.sh@808 -- # for n in $shm_files
00:18:19.111 07:35:33 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:18:19.111 nvmf_trace.0
00:18:19.111 07:35:33 -- common/autotest_common.sh@811 -- # return 0
00:18:19.111 07:35:33 -- fips/fips.sh@16 -- # killprocess 4117208
00:18:19.111 07:35:33 -- common/autotest_common.sh@926 -- # '[' -z 4117208 ']'
00:18:19.111 07:35:33 -- common/autotest_common.sh@930 -- # kill -0 4117208
00:18:19.111 07:35:33 -- common/autotest_common.sh@931 -- # uname
00:18:19.111 07:35:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:19.111 07:35:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4117208
00:18:19.111 07:35:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:18:19.111 07:35:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:18:19.111 07:35:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4117208'
00:18:19.111 killing process with pid 4117208
00:18:19.111 07:35:33 -- common/autotest_common.sh@945 -- # kill 4117208
00:18:19.112 Received shutdown signal, test time was about 10.000000 seconds
00:18:19.112
00:18:19.112 Latency(us)
00:18:19.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:19.112 ===================================================================================================================
00:18:19.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:19.112 07:35:33 -- common/autotest_common.sh@950 -- # wait 4117208
00:18:19.112 07:35:33 -- fips/fips.sh@17 -- # nvmftestfini
00:18:19.112 07:35:33 -- nvmf/common.sh@476 -- # nvmfcleanup
00:18:19.112 07:35:33 -- nvmf/common.sh@116 -- # sync
00:18:19.112 07:35:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:18:19.112 07:35:33 -- nvmf/common.sh@119 -- # set +e
00:18:19.112 07:35:33 -- nvmf/common.sh@120 -- # for i in {1..20}
00:18:19.112 07:35:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:18:19.112 rmmod nvme_tcp
00:18:19.112 rmmod nvme_fabrics
00:18:19.112 rmmod nvme_keyring
00:18:19.112 07:35:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:18:19.112 07:35:33 -- nvmf/common.sh@123 -- # set -e
00:18:19.112 07:35:33 -- nvmf/common.sh@124 -- # return 0
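
Stripped of harness plumbing, the TLS exercise that just finished has three moving parts: a PSK written to a private file, a second SPDK app (bdevperf) attaching to the namespaced target's TLS listener with --psk, and a timed verify workload. A condensed sketch reusing the commands from the trace (paths relative to an SPDK build tree; the sleep stands in for the harness's waitforlisten polling):

set -e
KEY=/tmp/key.txt
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
chmod 0600 "$KEY"          # keep the key material private, as fips.sh does

# bdevperf in -z (wait-for-RPC) mode on its own socket, separate from the target.
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
sleep 2

# Attach through the TLS listener using the PSK, then kick off the timed run.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
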
00:18:19.112 07:35:33 -- nvmf/common.sh@477 -- # '[' -n 4116969 ']'
00:18:19.112 07:35:33 -- nvmf/common.sh@478 -- # killprocess 4116969
00:18:19.112 07:35:33 -- common/autotest_common.sh@926 -- # '[' -z 4116969 ']'
00:18:19.112 07:35:33 -- common/autotest_common.sh@930 -- # kill -0 4116969
00:18:19.112 07:35:33 -- common/autotest_common.sh@931 -- # uname
00:18:19.112 07:35:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:19.112 07:35:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4116969
00:18:19.112 07:35:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:18:19.112 07:35:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:18:19.112 07:35:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4116969'
00:18:19.112 killing process with pid 4116969
00:18:19.112 07:35:33 -- common/autotest_common.sh@945 -- # kill 4116969
00:18:19.112 07:35:33 -- common/autotest_common.sh@950 -- # wait 4116969
00:18:19.112 07:35:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:18:19.112 07:35:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:18:19.112 07:35:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:18:19.112 07:35:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:19.112 07:35:33 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:18:19.112 07:35:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:19.112 07:35:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:19.112 07:35:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:19.677 07:35:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:18:19.677 07:35:35 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:18:19.935
00:18:19.935 real 0m17.969s
00:18:19.935 user 0m22.512s
00:18:19.935 sys 0m6.830s
00:18:19.935 07:35:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:18:19.935 07:35:35 -- common/autotest_common.sh@10 -- # set +x
00:18:19.935 ************************************
00:18:19.935 END TEST nvmf_fips
00:18:19.935 ************************************
00:18:19.935 07:35:35 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']'
00:18:19.935 07:35:35 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:18:19.935 07:35:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:18:19.935 07:35:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:18:19.935 07:35:35 -- common/autotest_common.sh@10 -- # set +x
00:18:19.935 ************************************
00:18:19.935 START TEST nvmf_fuzz
00:18:19.935 ************************************
00:18:19.935 07:35:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
00:18:19.935 * Looking for test storage...
00:18:19.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.935 07:35:35 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.935 07:35:35 -- nvmf/common.sh@7 -- # uname -s 00:18:19.935 07:35:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.935 07:35:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.935 07:35:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.935 07:35:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.935 07:35:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.935 07:35:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.935 07:35:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.935 07:35:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.935 07:35:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.935 07:35:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.935 07:35:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.935 07:35:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.935 07:35:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.935 07:35:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.935 07:35:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.936 07:35:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.936 07:35:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.936 07:35:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.936 07:35:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.936 07:35:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.936 07:35:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.936 07:35:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.936 07:35:35 -- paths/export.sh@5 -- # export PATH 00:18:19.936 07:35:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.936 07:35:35 -- nvmf/common.sh@46 -- # : 0 00:18:19.936 07:35:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:19.936 07:35:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:19.936 07:35:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:19.936 07:35:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.936 07:35:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.936 07:35:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:19.936 07:35:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:19.936 07:35:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:19.936 07:35:35 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:19.936 07:35:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:19.936 07:35:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.936 07:35:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:19.936 07:35:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:19.936 07:35:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:19.936 07:35:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.936 07:35:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.936 07:35:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.936 07:35:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:19.936 07:35:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:19.936 07:35:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:19.936 07:35:35 -- common/autotest_common.sh@10 -- # set +x 00:18:21.837 07:35:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:21.837 07:35:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:21.837 07:35:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:21.837 07:35:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:21.837 07:35:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:21.837 07:35:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:21.837 07:35:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:21.837 07:35:37 -- nvmf/common.sh@294 -- # net_devs=() 00:18:21.837 07:35:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:21.837 07:35:37 -- nvmf/common.sh@295 -- # e810=() 00:18:21.837 07:35:37 -- nvmf/common.sh@295 -- # local -ga e810 00:18:21.837 07:35:37 -- nvmf/common.sh@296 -- # x722=() 
00:18:21.837 07:35:37 -- nvmf/common.sh@296 -- # local -ga x722 00:18:21.837 07:35:37 -- nvmf/common.sh@297 -- # mlx=() 00:18:21.837 07:35:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:21.837 07:35:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.837 07:35:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:21.837 07:35:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:21.837 07:35:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:21.837 07:35:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:21.837 07:35:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:21.837 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:21.837 07:35:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:21.837 07:35:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:21.837 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:21.837 07:35:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:21.837 07:35:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:21.837 07:35:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.837 07:35:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:21.837 07:35:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.837 07:35:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:21.837 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:21.837 07:35:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:18:21.837 07:35:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:21.837 07:35:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.837 07:35:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:21.837 07:35:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.837 07:35:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:21.837 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:21.837 07:35:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.837 07:35:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:21.837 07:35:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:21.837 07:35:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:21.837 07:35:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:21.837 07:35:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.837 07:35:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.837 07:35:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.837 07:35:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:21.837 07:35:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.837 07:35:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.837 07:35:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:21.837 07:35:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.837 07:35:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.837 07:35:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:21.837 07:35:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:21.837 07:35:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.837 07:35:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.096 07:35:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.096 07:35:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.096 07:35:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:22.096 07:35:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.096 07:35:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.096 07:35:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.096 07:35:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:22.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:18:22.096 00:18:22.096 --- 10.0.0.2 ping statistics --- 00:18:22.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.096 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:18:22.096 07:35:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:22.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:18:22.096 00:18:22.096 --- 10.0.0.1 ping statistics --- 00:18:22.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.096 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:18:22.096 07:35:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.096 07:35:38 -- nvmf/common.sh@410 -- # return 0 00:18:22.096 07:35:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:22.096 07:35:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.096 07:35:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:22.096 07:35:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:22.096 07:35:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.096 07:35:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:22.097 07:35:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:22.097 07:35:38 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4120562 00:18:22.097 07:35:38 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:22.097 07:35:38 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:22.097 07:35:38 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4120562 00:18:22.097 07:35:38 -- common/autotest_common.sh@819 -- # '[' -z 4120562 ']' 00:18:22.097 07:35:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.097 07:35:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:22.097 07:35:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
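
waitforlisten, traced here, is how the harness avoids racing the target's startup: instead of sleeping a fixed interval it polls until the RPC socket answers (or the process dies). A rough equivalent of that loop; SPDK's real helper takes more options, and rpc_get_methods is simply a cheap RPC to probe with:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1                                       # never came up within ~50s
}
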
00:18:22.097 07:35:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:22.097 07:35:38 -- common/autotest_common.sh@10 -- # set +x 00:18:23.030 07:35:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:23.030 07:35:39 -- common/autotest_common.sh@852 -- # return 0 00:18:23.030 07:35:39 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:23.030 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:23.030 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.030 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:23.030 07:35:39 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:23.030 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:23.030 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.030 Malloc0 00:18:23.030 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:23.030 07:35:39 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:23.031 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:23.031 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.031 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:23.031 07:35:39 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:23.031 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:23.031 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.031 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:23.031 07:35:39 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:23.031 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:23.031 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.031 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:23.031 07:35:39 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:23.031 07:35:39 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:55.095 Fuzzing completed. Shutting down the fuzz application 00:18:55.095 00:18:55.095 Dumping successful admin opcodes: 00:18:55.095 8, 9, 10, 24, 00:18:55.095 Dumping successful io opcodes: 00:18:55.095 0, 9, 00:18:55.095 NS: 0x200003aeff00 I/O qp, Total commands completed: 455261, total successful commands: 2641, random_seed: 4124256128 00:18:55.095 NS: 0x200003aeff00 admin qp, Total commands completed: 56656, total successful commands: 449, random_seed: 2085990144 00:18:55.095 07:36:09 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:55.095 Fuzzing completed. 
Shutting down the fuzz application 00:18:55.095 00:18:55.095 Dumping successful admin opcodes: 00:18:55.095 24, 00:18:55.095 Dumping successful io opcodes: 00:18:55.095 00:18:55.095 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 336189986 00:18:55.095 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 336356853 00:18:55.095 07:36:11 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.095 07:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.095 07:36:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.095 07:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.095 07:36:11 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:55.095 07:36:11 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:55.095 07:36:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:55.095 07:36:11 -- nvmf/common.sh@116 -- # sync 00:18:55.352 07:36:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:55.352 07:36:11 -- nvmf/common.sh@119 -- # set +e 00:18:55.352 07:36:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:55.352 07:36:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:55.352 rmmod nvme_tcp 00:18:55.352 rmmod nvme_fabrics 00:18:55.352 rmmod nvme_keyring 00:18:55.352 07:36:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:55.352 07:36:11 -- nvmf/common.sh@123 -- # set -e 00:18:55.352 07:36:11 -- nvmf/common.sh@124 -- # return 0 00:18:55.352 07:36:11 -- nvmf/common.sh@477 -- # '[' -n 4120562 ']' 00:18:55.352 07:36:11 -- nvmf/common.sh@478 -- # killprocess 4120562 00:18:55.352 07:36:11 -- common/autotest_common.sh@926 -- # '[' -z 4120562 ']' 00:18:55.352 07:36:11 -- common/autotest_common.sh@930 -- # kill -0 4120562 00:18:55.352 07:36:11 -- common/autotest_common.sh@931 -- # uname 00:18:55.352 07:36:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:55.352 07:36:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4120562 00:18:55.352 07:36:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:55.352 07:36:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:55.352 07:36:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4120562' 00:18:55.352 killing process with pid 4120562 00:18:55.352 07:36:11 -- common/autotest_common.sh@945 -- # kill 4120562 00:18:55.352 07:36:11 -- common/autotest_common.sh@950 -- # wait 4120562 00:18:55.631 07:36:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:55.631 07:36:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:55.631 07:36:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:55.631 07:36:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.631 07:36:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:55.631 07:36:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.631 07:36:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.631 07:36:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.542 07:36:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:57.542 07:36:13 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:18:57.800 00:18:57.800 real 0m37.842s 00:18:57.800 user 0m51.354s 00:18:57.800 sys 
0m15.708s 00:18:57.800 07:36:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:57.800 07:36:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.800 ************************************ 00:18:57.800 END TEST nvmf_fuzz 00:18:57.800 ************************************ 00:18:57.800 07:36:13 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:57.800 07:36:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:57.800 07:36:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:57.800 07:36:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.800 ************************************ 00:18:57.800 START TEST nvmf_multiconnection 00:18:57.800 ************************************ 00:18:57.800 07:36:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:57.800 * Looking for test storage... 00:18:57.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.800 07:36:13 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.800 07:36:13 -- nvmf/common.sh@7 -- # uname -s 00:18:57.800 07:36:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.800 07:36:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.800 07:36:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.800 07:36:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.800 07:36:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.800 07:36:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.800 07:36:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.800 07:36:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.800 07:36:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.800 07:36:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.800 07:36:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.800 07:36:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.800 07:36:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.800 07:36:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.800 07:36:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.800 07:36:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.800 07:36:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.800 07:36:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.800 07:36:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.800 07:36:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.801 07:36:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.801 07:36:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.801 07:36:13 -- paths/export.sh@5 -- # export PATH 00:18:57.801 07:36:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.801 07:36:13 -- nvmf/common.sh@46 -- # : 0 00:18:57.801 07:36:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:57.801 07:36:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:57.801 07:36:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:57.801 07:36:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.801 07:36:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.801 07:36:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:57.801 07:36:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:57.801 07:36:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:57.801 07:36:13 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.801 07:36:13 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.801 07:36:13 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:57.801 07:36:13 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:57.801 07:36:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:57.801 07:36:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.801 07:36:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:57.801 07:36:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:57.801 07:36:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:57.801 07:36:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.801 07:36:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.801 07:36:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.801 07:36:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:57.801 07:36:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:57.801 07:36:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:57.801 07:36:13 -- common/autotest_common.sh@10 -- 
# set +x 00:18:59.701 07:36:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:59.701 07:36:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:59.701 07:36:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:59.701 07:36:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:59.701 07:36:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:59.701 07:36:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:59.701 07:36:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:59.701 07:36:15 -- nvmf/common.sh@294 -- # net_devs=() 00:18:59.701 07:36:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:59.701 07:36:15 -- nvmf/common.sh@295 -- # e810=() 00:18:59.701 07:36:15 -- nvmf/common.sh@295 -- # local -ga e810 00:18:59.701 07:36:15 -- nvmf/common.sh@296 -- # x722=() 00:18:59.701 07:36:15 -- nvmf/common.sh@296 -- # local -ga x722 00:18:59.701 07:36:15 -- nvmf/common.sh@297 -- # mlx=() 00:18:59.701 07:36:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:59.701 07:36:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.701 07:36:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:59.701 07:36:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:59.701 07:36:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:59.701 07:36:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.701 07:36:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:59.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:59.701 07:36:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.701 07:36:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:59.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:59.701 07:36:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.701 07:36:15 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:59.701 07:36:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.701 07:36:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.701 07:36:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.701 07:36:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.701 07:36:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:59.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:59.701 07:36:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.701 07:36:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.701 07:36:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.701 07:36:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.701 07:36:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.701 07:36:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:59.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:59.701 07:36:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.701 07:36:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:59.701 07:36:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:59.701 07:36:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:59.701 07:36:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:59.701 07:36:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.701 07:36:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.701 07:36:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.701 07:36:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:59.701 07:36:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.701 07:36:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.701 07:36:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:59.701 07:36:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.701 07:36:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.701 07:36:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:59.701 07:36:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:59.701 07:36:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.701 07:36:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.701 07:36:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.701 07:36:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.701 07:36:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:59.960 07:36:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.960 07:36:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.960 07:36:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.960 07:36:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:59.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:59.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:18:59.960 00:18:59.960 --- 10.0.0.2 ping statistics --- 00:18:59.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.960 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:18:59.960 07:36:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:18:59.960 00:18:59.960 --- 10.0.0.1 ping statistics --- 00:18:59.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.960 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:18:59.960 07:36:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.960 07:36:15 -- nvmf/common.sh@410 -- # return 0 00:18:59.960 07:36:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:59.960 07:36:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.960 07:36:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:59.960 07:36:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:59.960 07:36:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.960 07:36:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:59.960 07:36:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:59.960 07:36:15 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:59.960 07:36:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:59.960 07:36:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:59.960 07:36:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.960 07:36:15 -- nvmf/common.sh@469 -- # nvmfpid=4127171 00:18:59.960 07:36:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:59.960 07:36:15 -- nvmf/common.sh@470 -- # waitforlisten 4127171 00:18:59.960 07:36:15 -- common/autotest_common.sh@819 -- # '[' -z 4127171 ']' 00:18:59.960 07:36:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.960 07:36:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.960 07:36:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.960 07:36:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.960 07:36:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.960 [2024-07-14 07:36:16.010829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:59.960 [2024-07-14 07:36:16.010926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.960 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.960 [2024-07-14 07:36:16.081605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.219 [2024-07-14 07:36:16.201021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:00.219 [2024-07-14 07:36:16.201175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
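The namespace plumbing traced above (nvmf_tcp_init) is the core of the TCP phy test bed: one port of the E810 pair (cvl_0_0) is moved into a network namespace and becomes the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Below is a condensed sketch of those steps, reconstructed from the commands echoed in the trace; the interface, namespace, and address values are the ones from this run, not general defaults, and this is not the verbatim nvmf/common.sh.

#!/usr/bin/env bash
# Rebuild the two-sided NVMe/TCP test bed seen in the trace.
set -euo pipefail

TARGET_IF=cvl_0_0        # becomes the target NIC inside the namespace
INITIATOR_IF=cvl_0_1     # stays in the root namespace as the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in, then verify both directions, exactly
# as the trace does before starting nvmf_tgt under "ip netns exec $NS".
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1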
00:19:00.219 [2024-07-14 07:36:16.201192] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.219 [2024-07-14 07:36:16.201205] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.219 [2024-07-14 07:36:16.201273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.219 [2024-07-14 07:36:16.201310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.219 [2024-07-14 07:36:16.201366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.219 [2024-07-14 07:36:16.201369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.784 07:36:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.784 07:36:16 -- common/autotest_common.sh@852 -- # return 0 00:19:00.784 07:36:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:00.784 07:36:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:00.784 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:00.784 07:36:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.784 07:36:16 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:00.784 07:36:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.784 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:00.784 [2024-07-14 07:36:16.950201] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.043 07:36:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.043 07:36:16 -- target/multiconnection.sh@21 -- # seq 1 11 00:19:01.043 07:36:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.043 07:36:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:01.043 07:36:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.043 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.043 Malloc1 00:19:01.043 07:36:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:19:01.044 07:36:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.044 07:36:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 [2024-07-14 07:36:17.005889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.044 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:01.044 07:36:17 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 Malloc2 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.044 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 Malloc3 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.044 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 Malloc4 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.044 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 Malloc5 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.044 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.044 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.044 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:19:01.044 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.044 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.303 Malloc6 00:19:01.303 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.304 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 Malloc7 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.304 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 Malloc8 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.304 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 Malloc9 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
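Each Malloc bdev in this stretch of the trace goes through the same four-step RPC pattern: create the bdev, create subsystem cnode$i with serial SPDK$i, attach the bdev as a namespace, and expose a TCP listener on 10.0.0.2:4420. Collapsed into a standalone script against scripts/rpc.py, this looks as follows; it is a sketch that assumes nvmf_tgt is already running and reachable on the default /var/tmp/spdk.sock, and the 64 MiB / 512 B / 11-subsystem values are MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE, and NVMF_SUBSYS from multiconnection.sh.

#!/usr/bin/env bash
# Standalone version of the multiconnection target setup loop.
set -euo pipefail
RPC=scripts/rpc.py

# One TCP transport with an 8 KiB I/O unit size (-u 8192), as in the test.
$RPC nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 11); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done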
00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.304 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 Malloc10 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.304 07:36:17 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.304 07:36:17 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:19:01.304 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.304 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.563 Malloc11 00:19:01.563 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.563 07:36:17 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:19:01.563 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.563 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.563 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.563 07:36:17 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:19:01.563 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.563 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.563 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.563 07:36:17 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:19:01.563 07:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.563 07:36:17 -- common/autotest_common.sh@10 -- # set +x 00:19:01.563 07:36:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.563 07:36:17 -- target/multiconnection.sh@28 -- # seq 1 11 00:19:01.563 07:36:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.563 07:36:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.129 07:36:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:19:02.129 07:36:18 -- common/autotest_common.sh@1177 -- # local i=0 00:19:02.129 07:36:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.129 07:36:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:02.129 07:36:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:04.027 07:36:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:04.027 07:36:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:04.027 07:36:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:19:04.027 07:36:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:04.027 07:36:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.027 07:36:20 -- common/autotest_common.sh@1187 -- # return 0 00:19:04.027 07:36:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.027 07:36:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:19:04.961 07:36:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:19:04.961 07:36:20 -- common/autotest_common.sh@1177 -- # local i=0 00:19:04.961 07:36:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.961 07:36:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:04.961 07:36:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:06.884 07:36:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:06.884 07:36:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:06.884 07:36:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:19:06.885 07:36:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:06.885 07:36:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.885 07:36:22 -- common/autotest_common.sh@1187 -- # return 0 00:19:06.885 07:36:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:06.885 07:36:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:19:07.458 07:36:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:19:07.458 07:36:23 -- common/autotest_common.sh@1177 -- # local i=0 00:19:07.458 07:36:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.458 07:36:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:07.458 07:36:23 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:19:09.350 07:36:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:09.350 07:36:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:09.350 07:36:25 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:19:09.350 07:36:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:09.350 07:36:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.350 07:36:25 -- common/autotest_common.sh@1187 -- # return 0 00:19:09.350 07:36:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.350 07:36:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:19:10.282 07:36:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:19:10.282 07:36:26 -- common/autotest_common.sh@1177 -- # local i=0 00:19:10.282 07:36:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:10.282 07:36:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:10.282 07:36:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:12.179 07:36:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:12.179 07:36:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:12.179 07:36:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:19:12.179 07:36:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:12.179 07:36:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.179 07:36:28 -- common/autotest_common.sh@1187 -- # return 0 00:19:12.179 07:36:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:12.179 07:36:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:19:13.114 07:36:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:19:13.114 07:36:28 -- common/autotest_common.sh@1177 -- # local i=0 00:19:13.114 07:36:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:13.114 07:36:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:13.114 07:36:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:15.010 07:36:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:15.010 07:36:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:15.010 07:36:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:19:15.010 07:36:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:15.010 07:36:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.010 07:36:31 -- common/autotest_common.sh@1187 -- # return 0 00:19:15.010 07:36:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.010 07:36:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:19:15.941 07:36:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:15.941 07:36:31 -- common/autotest_common.sh@1177 -- # local i=0 00:19:15.941 07:36:31 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:19:15.941 07:36:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:15.941 07:36:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:17.834 07:36:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:17.834 07:36:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:17.834 07:36:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:19:17.834 07:36:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:17.834 07:36:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:17.834 07:36:33 -- common/autotest_common.sh@1187 -- # return 0 00:19:17.835 07:36:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:17.835 07:36:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:19:18.765 07:36:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:18.765 07:36:34 -- common/autotest_common.sh@1177 -- # local i=0 00:19:18.765 07:36:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:18.765 07:36:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:18.765 07:36:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:20.664 07:36:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:20.664 07:36:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:20.664 07:36:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:19:20.664 07:36:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:20.664 07:36:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:20.664 07:36:36 -- common/autotest_common.sh@1187 -- # return 0 00:19:20.664 07:36:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:20.664 07:36:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:21.230 07:36:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:21.230 07:36:37 -- common/autotest_common.sh@1177 -- # local i=0 00:19:21.230 07:36:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:21.230 07:36:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:21.230 07:36:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:23.755 07:36:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:23.755 07:36:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:23.755 07:36:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:19:23.755 07:36:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:23.755 07:36:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:23.755 07:36:39 -- common/autotest_common.sh@1187 -- # return 0 00:19:23.755 07:36:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:23.755 07:36:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:24.013 07:36:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:24.013 
07:36:40 -- common/autotest_common.sh@1177 -- # local i=0 00:19:24.013 07:36:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:24.013 07:36:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:24.013 07:36:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:26.537 07:36:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:26.537 07:36:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:26.537 07:36:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:19:26.537 07:36:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:26.537 07:36:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:26.537 07:36:42 -- common/autotest_common.sh@1187 -- # return 0 00:19:26.537 07:36:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:26.537 07:36:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:27.101 07:36:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:27.101 07:36:43 -- common/autotest_common.sh@1177 -- # local i=0 00:19:27.101 07:36:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.101 07:36:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:27.101 07:36:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:29.014 07:36:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:29.014 07:36:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:29.014 07:36:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:19:29.014 07:36:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:29.014 07:36:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.014 07:36:45 -- common/autotest_common.sh@1187 -- # return 0 00:19:29.014 07:36:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.014 07:36:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:29.954 07:36:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:29.954 07:36:45 -- common/autotest_common.sh@1177 -- # local i=0 00:19:29.954 07:36:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:29.954 07:36:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:29.954 07:36:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:31.853 07:36:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:31.853 07:36:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:31.853 07:36:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:19:31.853 07:36:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:31.853 07:36:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:31.853 07:36:47 -- common/autotest_common.sh@1187 -- # return 0 00:19:31.853 07:36:47 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:31.853 [global] 00:19:31.853 thread=1 00:19:31.853 invalidate=1 00:19:31.853 rw=read 00:19:31.853 time_based=1 00:19:31.853 
runtime=10 00:19:31.853 ioengine=libaio 00:19:31.853 direct=1 00:19:31.853 bs=262144 00:19:31.853 iodepth=64 00:19:31.853 norandommap=1 00:19:31.853 numjobs=1 00:19:31.853 00:19:31.853 [job0] 00:19:31.853 filename=/dev/nvme0n1 00:19:31.853 [job1] 00:19:31.853 filename=/dev/nvme10n1 00:19:31.853 [job2] 00:19:31.853 filename=/dev/nvme1n1 00:19:31.853 [job3] 00:19:31.853 filename=/dev/nvme2n1 00:19:31.853 [job4] 00:19:31.853 filename=/dev/nvme3n1 00:19:31.853 [job5] 00:19:31.853 filename=/dev/nvme4n1 00:19:31.853 [job6] 00:19:31.853 filename=/dev/nvme5n1 00:19:31.853 [job7] 00:19:31.853 filename=/dev/nvme6n1 00:19:31.853 [job8] 00:19:31.853 filename=/dev/nvme7n1 00:19:31.853 [job9] 00:19:31.853 filename=/dev/nvme8n1 00:19:31.853 [job10] 00:19:31.853 filename=/dev/nvme9n1 00:19:32.111 Could not set queue depth (nvme0n1) 00:19:32.111 Could not set queue depth (nvme10n1) 00:19:32.111 Could not set queue depth (nvme1n1) 00:19:32.111 Could not set queue depth (nvme2n1) 00:19:32.111 Could not set queue depth (nvme3n1) 00:19:32.111 Could not set queue depth (nvme4n1) 00:19:32.111 Could not set queue depth (nvme5n1) 00:19:32.111 Could not set queue depth (nvme6n1) 00:19:32.111 Could not set queue depth (nvme7n1) 00:19:32.111 Could not set queue depth (nvme8n1) 00:19:32.111 Could not set queue depth (nvme9n1) 00:19:32.111 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:32.111 fio-3.35 00:19:32.111 Starting 11 threads 00:19:44.315 00:19:44.315 job0: (groupid=0, jobs=1): err= 0: pid=4131554: Sun Jul 14 07:36:58 2024 00:19:44.315 read: IOPS=783, BW=196MiB/s (205MB/s)(1972MiB/10064msec) 00:19:44.315 slat (usec): min=8, max=126311, avg=1016.70, stdev=4517.84 00:19:44.315 clat (usec): min=1075, max=313106, avg=80610.55, stdev=45290.69 00:19:44.315 lat (usec): min=1103, max=334357, avg=81627.25, stdev=45920.04 00:19:44.315 clat percentiles (msec): 00:19:44.315 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 30], 20.00th=[ 50], 00:19:44.315 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 81], 00:19:44.315 | 70.00th=[ 93], 80.00th=[ 108], 90.00th=[ 140], 95.00th=[ 178], 00:19:44.316 | 99.00th=[ 222], 99.50th=[ 234], 99.90th=[ 249], 99.95th=[ 268], 00:19:44.316 | 99.99th=[ 313] 00:19:44.316 bw ( KiB/s): min=71680, max=308736, per=11.70%, 
avg=200277.85, stdev=61855.19, samples=20 00:19:44.316 iops : min= 280, max= 1206, avg=782.25, stdev=241.63, samples=20 00:19:44.316 lat (msec) : 2=0.03%, 4=0.72%, 10=1.97%, 20=4.10%, 50=13.78% 00:19:44.316 lat (msec) : 100=54.10%, 250=25.23%, 500=0.08% 00:19:44.316 cpu : usr=0.62%, sys=2.51%, ctx=1785, majf=0, minf=4097 00:19:44.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:44.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.316 issued rwts: total=7886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.316 job1: (groupid=0, jobs=1): err= 0: pid=4131555: Sun Jul 14 07:36:58 2024 00:19:44.316 read: IOPS=912, BW=228MiB/s (239MB/s)(2308MiB/10114msec) 00:19:44.316 slat (usec): min=9, max=134705, avg=932.06, stdev=3565.45 00:19:44.316 clat (msec): min=3, max=315, avg=69.14, stdev=49.03 00:19:44.316 lat (msec): min=3, max=315, avg=70.08, stdev=49.40 00:19:44.316 clat percentiles (msec): 00:19:44.316 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 43], 00:19:44.316 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 47], 60.00th=[ 52], 00:19:44.316 | 70.00th=[ 62], 80.00th=[ 84], 90.00th=[ 140], 95.00th=[ 178], 00:19:44.316 | 99.00th=[ 262], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 300], 00:19:44.316 | 99.99th=[ 317] 00:19:44.316 bw ( KiB/s): min=77312, max=369152, per=13.71%, avg=234672.10, stdev=110499.01, samples=20 00:19:44.316 iops : min= 302, max= 1442, avg=916.65, stdev=431.62, samples=20 00:19:44.316 lat (msec) : 4=0.02%, 10=0.35%, 20=0.32%, 50=57.39%, 100=25.45% 00:19:44.316 lat (msec) : 250=15.18%, 500=1.29% 00:19:44.316 cpu : usr=0.51%, sys=3.20%, ctx=1887, majf=0, minf=4097 00:19:44.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:44.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.316 issued rwts: total=9231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.316 job2: (groupid=0, jobs=1): err= 0: pid=4131556: Sun Jul 14 07:36:58 2024 00:19:44.316 read: IOPS=370, BW=92.6MiB/s (97.1MB/s)(932MiB/10071msec) 00:19:44.316 slat (usec): min=14, max=116513, avg=2367.03, stdev=7687.60 00:19:44.316 clat (msec): min=13, max=331, avg=170.37, stdev=50.59 00:19:44.316 lat (msec): min=13, max=342, avg=172.73, stdev=51.64 00:19:44.316 clat percentiles (msec): 00:19:44.316 | 1.00th=[ 68], 5.00th=[ 90], 10.00th=[ 107], 20.00th=[ 129], 00:19:44.316 | 30.00th=[ 142], 40.00th=[ 155], 50.00th=[ 167], 60.00th=[ 178], 00:19:44.316 | 70.00th=[ 192], 80.00th=[ 215], 90.00th=[ 243], 95.00th=[ 262], 00:19:44.316 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 313], 99.95th=[ 330], 00:19:44.316 | 99.99th=[ 330] 00:19:44.316 bw ( KiB/s): min=65536, max=157381, per=5.48%, avg=93833.85, stdev=23551.16, samples=20 00:19:44.316 iops : min= 256, max= 614, avg=366.50, stdev=91.89, samples=20 00:19:44.316 lat (msec) : 20=0.16%, 50=0.11%, 100=6.87%, 250=85.01%, 500=7.86% 00:19:44.316 cpu : usr=0.35%, sys=1.39%, ctx=940, majf=0, minf=4097 00:19:44.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:44.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:19:44.316 issued rwts: total=3729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.316 job3: (groupid=0, jobs=1): err= 0: pid=4131557: Sun Jul 14 07:36:58 2024 00:19:44.316 read: IOPS=572, BW=143MiB/s (150MB/s)(1440MiB/10053msec) 00:19:44.316 slat (usec): min=13, max=102549, avg=1453.04, stdev=5326.18 00:19:44.316 clat (msec): min=3, max=360, avg=110.19, stdev=65.52 00:19:44.316 lat (msec): min=3, max=360, avg=111.64, stdev=66.51 00:19:44.316 clat percentiles (msec): 00:19:44.316 | 1.00th=[ 8], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 50], 00:19:44.316 | 30.00th=[ 57], 40.00th=[ 70], 50.00th=[ 96], 60.00th=[ 118], 00:19:44.316 | 70.00th=[ 140], 80.00th=[ 171], 90.00th=[ 215], 95.00th=[ 241], 00:19:44.316 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 309], 99.95th=[ 351], 00:19:44.316 | 99.99th=[ 359] 00:19:44.316 bw ( KiB/s): min=67584, max=337408, per=8.52%, avg=145802.80, stdev=83736.55, samples=20 00:19:44.316 iops : min= 264, max= 1318, avg=569.50, stdev=327.09, samples=20 00:19:44.316 lat (msec) : 4=0.05%, 10=1.13%, 20=0.30%, 50=19.47%, 100=30.49% 00:19:44.316 lat (msec) : 250=44.92%, 500=3.65% 00:19:44.316 cpu : usr=0.34%, sys=2.17%, ctx=1252, majf=0, minf=4097 00:19:44.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:44.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.316 issued rwts: total=5759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.316 job4: (groupid=0, jobs=1): err= 0: pid=4131558: Sun Jul 14 07:36:58 2024 00:19:44.316 read: IOPS=496, BW=124MiB/s (130MB/s)(1252MiB/10076msec) 00:19:44.316 slat (usec): min=13, max=206056, avg=1823.69, stdev=8031.42 00:19:44.316 clat (msec): min=3, max=450, avg=126.91, stdev=69.68 00:19:44.316 lat (msec): min=3, max=450, avg=128.74, stdev=70.94 00:19:44.316 clat percentiles (msec): 00:19:44.316 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 36], 20.00th=[ 57], 00:19:44.316 | 30.00th=[ 91], 40.00th=[ 111], 50.00th=[ 126], 60.00th=[ 140], 00:19:44.316 | 70.00th=[ 157], 80.00th=[ 188], 90.00th=[ 228], 95.00th=[ 249], 00:19:44.316 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 326], 99.95th=[ 409], 00:19:44.316 | 99.99th=[ 451] 00:19:44.316 bw ( KiB/s): min=70144, max=257021, per=7.39%, avg=126489.45, stdev=45676.11, samples=20 00:19:44.316 iops : min= 274, max= 1003, avg=494.05, stdev=178.27, samples=20 00:19:44.316 lat (msec) : 4=0.06%, 10=1.10%, 20=3.86%, 50=10.73%, 100=17.28% 00:19:44.316 lat (msec) : 250=62.25%, 500=4.73% 00:19:44.316 cpu : usr=0.33%, sys=1.87%, ctx=1272, majf=0, minf=3721 00:19:44.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:44.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.316 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.316 job5: (groupid=0, jobs=1): err= 0: pid=4131559: Sun Jul 14 07:36:58 2024 00:19:44.316 read: IOPS=657, BW=164MiB/s (172MB/s)(1664MiB/10118msec) 00:19:44.316 slat (usec): min=13, max=145859, avg=1222.83, stdev=4361.11 00:19:44.316 clat (msec): min=2, max=283, avg=95.98, stdev=48.57 00:19:44.316 lat (msec): min=2, max=288, avg=97.20, stdev=49.11 00:19:44.316 
clat percentiles (msec): 00:19:44.316 | 1.00th=[ 12], 5.00th=[ 26], 10.00th=[ 42], 20.00th=[ 57], 00:19:44.316 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 89], 60.00th=[ 102], 00:19:44.316 | 70.00th=[ 116], 80.00th=[ 136], 90.00th=[ 157], 95.00th=[ 180], 00:19:44.316 | 99.00th=[ 257], 99.50th=[ 268], 99.90th=[ 275], 99.95th=[ 275], 00:19:44.316 | 99.99th=[ 284] 00:19:44.316 bw ( KiB/s): min=81920, max=290304, per=9.86%, avg=168775.70, stdev=62366.15, samples=20 00:19:44.316 iops : min= 320, max= 1134, avg=659.20, stdev=243.69, samples=20 00:19:44.316 lat (msec) : 4=0.05%, 10=0.65%, 20=2.46%, 50=12.11%, 100=43.90% 00:19:44.316 lat (msec) : 250=39.65%, 500=1.19% 00:19:44.316 cpu : usr=0.40%, sys=2.56%, ctx=1614, majf=0, minf=4097 00:19:44.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:44.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.316 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.316 job6: (groupid=0, jobs=1): err= 0: pid=4131560: Sun Jul 14 07:36:58 2024 00:19:44.316 read: IOPS=609, BW=152MiB/s (160MB/s)(1533MiB/10062msec) 00:19:44.316 slat (usec): min=10, max=142892, avg=1070.18, stdev=5959.07 00:19:44.316 clat (usec): min=1267, max=361974, avg=103863.57, stdev=68264.31 00:19:44.316 lat (usec): min=1291, max=366023, avg=104933.75, stdev=69054.88 00:19:44.316 clat percentiles (msec): 00:19:44.316 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 42], 00:19:44.316 | 30.00th=[ 57], 40.00th=[ 72], 50.00th=[ 89], 60.00th=[ 110], 00:19:44.316 | 70.00th=[ 138], 80.00th=[ 165], 90.00th=[ 203], 95.00th=[ 232], 00:19:44.316 | 99.00th=[ 264], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 355], 00:19:44.316 | 99.99th=[ 363] 00:19:44.316 bw ( KiB/s): min=79360, max=345088, per=9.08%, avg=155377.55, stdev=75223.90, samples=20 00:19:44.316 iops : min= 310, max= 1348, avg=606.90, stdev=293.85, samples=20 00:19:44.316 lat (msec) : 2=0.05%, 4=0.90%, 10=2.17%, 20=2.82%, 50=21.12% 00:19:44.316 lat (msec) : 100=28.19%, 250=42.88%, 500=1.88% 00:19:44.316 cpu : usr=0.31%, sys=2.19%, ctx=1733, majf=0, minf=4097 00:19:44.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:44.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.316 issued rwts: total=6133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.316 job7: (groupid=0, jobs=1): err= 0: pid=4131561: Sun Jul 14 07:36:58 2024 00:19:44.316 read: IOPS=576, BW=144MiB/s (151MB/s)(1459MiB/10119msec) 00:19:44.316 slat (usec): min=10, max=165557, avg=1037.99, stdev=5404.14 00:19:44.316 clat (usec): min=1836, max=337490, avg=109869.64, stdev=64297.24 00:19:44.316 lat (usec): min=1879, max=397734, avg=110907.62, stdev=64916.39 00:19:44.316 clat percentiles (msec): 00:19:44.316 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 56], 00:19:44.316 | 30.00th=[ 77], 40.00th=[ 91], 50.00th=[ 106], 60.00th=[ 117], 00:19:44.316 | 70.00th=[ 130], 80.00th=[ 150], 90.00th=[ 205], 95.00th=[ 245], 00:19:44.316 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 296], 99.95th=[ 300], 00:19:44.316 | 99.99th=[ 338] 00:19:44.316 bw ( KiB/s): min=77312, max=214528, per=8.63%, avg=147723.75, stdev=36326.41, samples=20 
00:19:44.316 iops : min= 302, max= 838, avg=577.00, stdev=141.91, samples=20 00:19:44.316 lat (msec) : 2=0.03%, 4=0.22%, 10=2.09%, 20=4.37%, 50=11.16% 00:19:44.316 lat (msec) : 100=27.47%, 250=49.94%, 500=4.71% 00:19:44.316 cpu : usr=0.29%, sys=1.67%, ctx=1664, majf=0, minf=4097 00:19:44.316 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:44.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.316 issued rwts: total=5835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.316 job8: (groupid=0, jobs=1): err= 0: pid=4131562: Sun Jul 14 07:36:58 2024 00:19:44.316 read: IOPS=586, BW=147MiB/s (154MB/s)(1475MiB/10060msec) 00:19:44.317 slat (usec): min=9, max=123402, avg=1114.86, stdev=4305.33 00:19:44.317 clat (usec): min=1307, max=307115, avg=107980.21, stdev=53139.95 00:19:44.317 lat (usec): min=1364, max=344897, avg=109095.08, stdev=53524.52 00:19:44.317 clat percentiles (msec): 00:19:44.317 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 70], 00:19:44.317 | 30.00th=[ 82], 40.00th=[ 91], 50.00th=[ 102], 60.00th=[ 115], 00:19:44.317 | 70.00th=[ 131], 80.00th=[ 148], 90.00th=[ 171], 95.00th=[ 199], 00:19:44.317 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 309], 00:19:44.317 | 99.99th=[ 309] 00:19:44.317 bw ( KiB/s): min=100864, max=234027, per=8.73%, avg=149326.95, stdev=36921.28, samples=20 00:19:44.317 iops : min= 394, max= 914, avg=583.40, stdev=144.10, samples=20 00:19:44.317 lat (msec) : 2=0.10%, 4=0.22%, 10=0.92%, 20=2.51%, 50=9.10% 00:19:44.317 lat (msec) : 100=35.71%, 250=48.86%, 500=2.58% 00:19:44.317 cpu : usr=0.36%, sys=2.01%, ctx=1528, majf=0, minf=4097 00:19:44.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:44.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.317 issued rwts: total=5898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.317 job9: (groupid=0, jobs=1): err= 0: pid=4131563: Sun Jul 14 07:36:58 2024 00:19:44.317 read: IOPS=496, BW=124MiB/s (130MB/s)(1256MiB/10106msec) 00:19:44.317 slat (usec): min=9, max=221677, avg=1318.55, stdev=7839.56 00:19:44.317 clat (usec): min=1513, max=512259, avg=127380.96, stdev=70591.16 00:19:44.317 lat (usec): min=1542, max=512278, avg=128699.50, stdev=71728.05 00:19:44.317 clat percentiles (msec): 00:19:44.317 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 62], 00:19:44.317 | 30.00th=[ 82], 40.00th=[ 103], 50.00th=[ 132], 60.00th=[ 150], 00:19:44.317 | 70.00th=[ 167], 80.00th=[ 186], 90.00th=[ 222], 95.00th=[ 247], 00:19:44.317 | 99.00th=[ 292], 99.50th=[ 296], 99.90th=[ 313], 99.95th=[ 380], 00:19:44.317 | 99.99th=[ 514] 00:19:44.317 bw ( KiB/s): min=78848, max=246272, per=7.42%, avg=126910.90, stdev=40577.23, samples=20 00:19:44.317 iops : min= 308, max= 962, avg=495.70, stdev=158.49, samples=20 00:19:44.317 lat (msec) : 2=0.06%, 4=0.16%, 10=2.79%, 20=3.66%, 50=8.92% 00:19:44.317 lat (msec) : 100=23.78%, 250=56.11%, 500=4.48%, 750=0.04% 00:19:44.317 cpu : usr=0.42%, sys=1.73%, ctx=1296, majf=0, minf=4097 00:19:44.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:44.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:44.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.317 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.317 job10: (groupid=0, jobs=1): err= 0: pid=4131568: Sun Jul 14 07:36:58 2024 00:19:44.317 read: IOPS=646, BW=162MiB/s (169MB/s)(1622MiB/10039msec) 00:19:44.317 slat (usec): min=11, max=100464, avg=1169.50, stdev=4887.16 00:19:44.317 clat (usec): min=1579, max=327814, avg=97764.77, stdev=70388.15 00:19:44.317 lat (usec): min=1632, max=327866, avg=98934.27, stdev=71439.66 00:19:44.317 clat percentiles (msec): 00:19:44.317 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 36], 00:19:44.317 | 30.00th=[ 48], 40.00th=[ 63], 50.00th=[ 77], 60.00th=[ 92], 00:19:44.317 | 70.00th=[ 136], 80.00th=[ 169], 90.00th=[ 207], 95.00th=[ 232], 00:19:44.317 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 292], 99.95th=[ 313], 00:19:44.317 | 99.99th=[ 330] 00:19:44.317 bw ( KiB/s): min=64000, max=330240, per=9.61%, avg=164480.55, stdev=90326.02, samples=20 00:19:44.317 iops : min= 250, max= 1290, avg=642.50, stdev=352.83, samples=20 00:19:44.317 lat (msec) : 2=0.03%, 4=0.46%, 10=4.08%, 20=4.90%, 50=21.54% 00:19:44.317 lat (msec) : 100=30.99%, 250=34.95%, 500=3.04% 00:19:44.317 cpu : usr=0.49%, sys=2.37%, ctx=1738, majf=0, minf=4097 00:19:44.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:44.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:44.317 issued rwts: total=6489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.317 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:44.317 00:19:44.317 Run status group 0 (all jobs): 00:19:44.317 READ: bw=1671MiB/s (1752MB/s), 92.6MiB/s-228MiB/s (97.1MB/s-239MB/s), io=16.5GiB (17.7GB), run=10039-10119msec 00:19:44.317 00:19:44.317 Disk stats (read/write): 00:19:44.317 nvme0n1: ios=15521/0, merge=0/0, ticks=1234783/0, in_queue=1234783, util=97.25% 00:19:44.317 nvme10n1: ios=18250/0, merge=0/0, ticks=1229880/0, in_queue=1229880, util=97.47% 00:19:44.317 nvme1n1: ios=7293/0, merge=0/0, ticks=1229427/0, in_queue=1229427, util=97.74% 00:19:44.317 nvme2n1: ios=11321/0, merge=0/0, ticks=1235250/0, in_queue=1235250, util=97.88% 00:19:44.317 nvme3n1: ios=9857/0, merge=0/0, ticks=1231075/0, in_queue=1231075, util=97.95% 00:19:44.317 nvme4n1: ios=13149/0, merge=0/0, ticks=1233113/0, in_queue=1233113, util=98.27% 00:19:44.317 nvme5n1: ios=12038/0, merge=0/0, ticks=1240157/0, in_queue=1240157, util=98.45% 00:19:44.317 nvme6n1: ios=11494/0, merge=0/0, ticks=1240466/0, in_queue=1240466, util=98.55% 00:19:44.317 nvme7n1: ios=11616/0, merge=0/0, ticks=1241066/0, in_queue=1241066, util=98.94% 00:19:44.317 nvme8n1: ios=9875/0, merge=0/0, ticks=1238241/0, in_queue=1238241, util=99.11% 00:19:44.317 nvme9n1: ios=12667/0, merge=0/0, ticks=1234342/0, in_queue=1234342, util=99.26% 00:19:44.317 07:36:58 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:44.317 [global] 00:19:44.317 thread=1 00:19:44.317 invalidate=1 00:19:44.317 rw=randwrite 00:19:44.317 time_based=1 00:19:44.317 runtime=10 00:19:44.317 ioengine=libaio 00:19:44.317 direct=1 00:19:44.317 bs=262144 00:19:44.317 iodepth=64 00:19:44.317 norandommap=1 00:19:44.317 numjobs=1 00:19:44.317 00:19:44.317 [job0] 00:19:44.317 
filename=/dev/nvme0n1 00:19:44.317 [job1] 00:19:44.317 filename=/dev/nvme10n1 00:19:44.317 [job2] 00:19:44.317 filename=/dev/nvme1n1 00:19:44.317 [job3] 00:19:44.317 filename=/dev/nvme2n1 00:19:44.317 [job4] 00:19:44.317 filename=/dev/nvme3n1 00:19:44.317 [job5] 00:19:44.317 filename=/dev/nvme4n1 00:19:44.317 [job6] 00:19:44.317 filename=/dev/nvme5n1 00:19:44.317 [job7] 00:19:44.317 filename=/dev/nvme6n1 00:19:44.317 [job8] 00:19:44.317 filename=/dev/nvme7n1 00:19:44.317 [job9] 00:19:44.317 filename=/dev/nvme8n1 00:19:44.317 [job10] 00:19:44.317 filename=/dev/nvme9n1 00:19:44.317 Could not set queue depth (nvme0n1) 00:19:44.317 Could not set queue depth (nvme10n1) 00:19:44.317 Could not set queue depth (nvme1n1) 00:19:44.317 Could not set queue depth (nvme2n1) 00:19:44.317 Could not set queue depth (nvme3n1) 00:19:44.317 Could not set queue depth (nvme4n1) 00:19:44.317 Could not set queue depth (nvme5n1) 00:19:44.317 Could not set queue depth (nvme6n1) 00:19:44.317 Could not set queue depth (nvme7n1) 00:19:44.317 Could not set queue depth (nvme8n1) 00:19:44.317 Could not set queue depth (nvme9n1) 00:19:44.317 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.317 fio-3.35 00:19:44.317 Starting 11 threads 00:19:54.291 00:19:54.291 job0: (groupid=0, jobs=1): err= 0: pid=4132612: Sun Jul 14 07:37:09 2024 00:19:54.291 write: IOPS=353, BW=88.3MiB/s (92.6MB/s)(892MiB/10096msec); 0 zone resets 00:19:54.291 slat (usec): min=21, max=275676, avg=1943.46, stdev=10746.35 00:19:54.291 clat (msec): min=4, max=1615, avg=179.18, stdev=215.70 00:19:54.291 lat (msec): min=4, max=1617, avg=181.12, stdev=217.12 00:19:54.291 clat percentiles (msec): 00:19:54.291 | 1.00th=[ 13], 5.00th=[ 25], 10.00th=[ 41], 20.00th=[ 81], 00:19:54.291 | 30.00th=[ 107], 40.00th=[ 128], 50.00th=[ 138], 60.00th=[ 157], 00:19:54.291 | 70.00th=[ 176], 80.00th=[ 203], 90.00th=[ 262], 95.00th=[ 326], 00:19:54.291 | 99.00th=[ 1536], 99.50th=[ 1586], 99.90th=[ 1603], 99.95th=[ 1620], 00:19:54.291 | 99.99th=[ 1620] 00:19:54.291 bw ( KiB/s): min= 5120, max=178688, per=7.34%, avg=89676.15, stdev=52194.51, samples=20 00:19:54.291 iops : min= 20, max= 698, avg=350.25, stdev=203.96, samples=20 00:19:54.291 lat 
(msec) : 10=0.48%, 20=2.66%, 50=9.20%, 100=15.17%, 250=61.10% 00:19:54.291 lat (msec) : 500=7.29%, 750=0.64%, 1000=1.40%, 2000=2.05% 00:19:54.291 cpu : usr=1.07%, sys=1.20%, ctx=1916, majf=0, minf=1 00:19:54.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:19:54.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.291 issued rwts: total=0,3566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.291 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.291 job1: (groupid=0, jobs=1): err= 0: pid=4132613: Sun Jul 14 07:37:09 2024 00:19:54.291 write: IOPS=446, BW=112MiB/s (117MB/s)(1151MiB/10310msec); 0 zone resets 00:19:54.291 slat (usec): min=21, max=63793, avg=1804.70, stdev=4130.06 00:19:54.291 clat (msec): min=7, max=870, avg=141.39, stdev=82.20 00:19:54.291 lat (msec): min=7, max=876, avg=143.20, stdev=82.57 00:19:54.291 clat percentiles (msec): 00:19:54.291 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 66], 20.00th=[ 90], 00:19:54.291 | 30.00th=[ 106], 40.00th=[ 122], 50.00th=[ 138], 60.00th=[ 148], 00:19:54.291 | 70.00th=[ 159], 80.00th=[ 186], 90.00th=[ 218], 95.00th=[ 234], 00:19:54.291 | 99.00th=[ 651], 99.50th=[ 676], 99.90th=[ 869], 99.95th=[ 869], 00:19:54.291 | 99.99th=[ 869] 00:19:54.291 bw ( KiB/s): min=75776, max=226304, per=9.51%, avg=116202.30, stdev=35389.31, samples=20 00:19:54.291 iops : min= 296, max= 884, avg=453.90, stdev=138.24, samples=20 00:19:54.291 lat (msec) : 10=0.04%, 20=2.32%, 50=5.19%, 100=19.31%, 250=70.67% 00:19:54.291 lat (msec) : 500=1.28%, 750=1.04%, 1000=0.13% 00:19:54.291 cpu : usr=1.24%, sys=1.46%, ctx=1955, majf=0, minf=1 00:19:54.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:54.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.291 issued rwts: total=0,4603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.291 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.291 job2: (groupid=0, jobs=1): err= 0: pid=4132614: Sun Jul 14 07:37:09 2024 00:19:54.291 write: IOPS=408, BW=102MiB/s (107MB/s)(1029MiB/10086msec); 0 zone resets 00:19:54.291 slat (usec): min=25, max=449863, avg=2219.94, stdev=9194.56 00:19:54.291 clat (msec): min=3, max=819, avg=154.55, stdev=102.97 00:19:54.291 lat (msec): min=3, max=819, avg=156.77, stdev=104.32 00:19:54.291 clat percentiles (msec): 00:19:54.291 | 1.00th=[ 18], 5.00th=[ 60], 10.00th=[ 86], 20.00th=[ 93], 00:19:54.291 | 30.00th=[ 104], 40.00th=[ 114], 50.00th=[ 133], 60.00th=[ 148], 00:19:54.291 | 70.00th=[ 167], 80.00th=[ 190], 90.00th=[ 239], 95.00th=[ 317], 00:19:54.291 | 99.00th=[ 751], 99.50th=[ 751], 99.90th=[ 768], 99.95th=[ 818], 00:19:54.291 | 99.99th=[ 818] 00:19:54.291 bw ( KiB/s): min=30208, max=176128, per=8.50%, avg=103752.50, stdev=45160.98, samples=20 00:19:54.291 iops : min= 118, max= 688, avg=405.25, stdev=176.46, samples=20 00:19:54.291 lat (msec) : 4=0.02%, 10=0.36%, 20=0.78%, 50=2.84%, 100=24.44% 00:19:54.291 lat (msec) : 250=63.34%, 500=6.68%, 750=0.63%, 1000=0.90% 00:19:54.291 cpu : usr=1.17%, sys=1.22%, ctx=1453, majf=0, minf=1 00:19:54.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:54.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:19:54.291 issued rwts: total=0,4116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.291 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.291 job3: (groupid=0, jobs=1): err= 0: pid=4132615: Sun Jul 14 07:37:09 2024 00:19:54.291 write: IOPS=379, BW=94.9MiB/s (99.5MB/s)(957MiB/10083msec); 0 zone resets 00:19:54.291 slat (usec): min=22, max=510191, avg=1900.11, stdev=10955.96 00:19:54.291 clat (msec): min=2, max=1185, avg=166.54, stdev=148.58 00:19:54.291 lat (msec): min=2, max=1185, avg=168.44, stdev=150.10 00:19:54.291 clat percentiles (msec): 00:19:54.291 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 50], 20.00th=[ 72], 00:19:54.291 | 30.00th=[ 84], 40.00th=[ 113], 50.00th=[ 140], 60.00th=[ 176], 00:19:54.291 | 70.00th=[ 197], 80.00th=[ 220], 90.00th=[ 268], 95.00th=[ 359], 00:19:54.292 | 99.00th=[ 1003], 99.50th=[ 1116], 99.90th=[ 1183], 99.95th=[ 1183], 00:19:54.292 | 99.99th=[ 1183] 00:19:54.292 bw ( KiB/s): min=14848, max=207872, per=7.89%, avg=96379.60, stdev=52830.55, samples=20 00:19:54.292 iops : min= 58, max= 812, avg=376.40, stdev=206.34, samples=20 00:19:54.292 lat (msec) : 4=0.05%, 10=0.86%, 20=1.91%, 50=7.26%, 100=26.15% 00:19:54.292 lat (msec) : 250=51.46%, 500=9.17%, 750=1.49%, 1000=0.47%, 2000=1.18% 00:19:54.292 cpu : usr=1.02%, sys=1.33%, ctx=2151, majf=0, minf=1 00:19:54.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:54.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.292 issued rwts: total=0,3828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.292 job4: (groupid=0, jobs=1): err= 0: pid=4132616: Sun Jul 14 07:37:09 2024 00:19:54.292 write: IOPS=627, BW=157MiB/s (165MB/s)(1581MiB/10069msec); 0 zone resets 00:19:54.292 slat (usec): min=24, max=205324, avg=1554.35, stdev=4011.26 00:19:54.292 clat (msec): min=12, max=413, avg=100.32, stdev=37.13 00:19:54.292 lat (msec): min=12, max=413, avg=101.87, stdev=37.47 00:19:54.292 clat percentiles (msec): 00:19:54.292 | 1.00th=[ 57], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 73], 00:19:54.292 | 30.00th=[ 83], 40.00th=[ 89], 50.00th=[ 95], 60.00th=[ 100], 00:19:54.292 | 70.00th=[ 106], 80.00th=[ 118], 90.00th=[ 140], 95.00th=[ 157], 00:19:54.292 | 99.00th=[ 209], 99.50th=[ 342], 99.90th=[ 397], 99.95th=[ 405], 00:19:54.292 | 99.99th=[ 414] 00:19:54.292 bw ( KiB/s): min=84992, max=216064, per=13.12%, avg=160248.35, stdev=38959.59, samples=20 00:19:54.292 iops : min= 332, max= 844, avg=625.90, stdev=152.25, samples=20 00:19:54.292 lat (msec) : 20=0.19%, 50=0.63%, 100=60.29%, 250=38.07%, 500=0.82% 00:19:54.292 cpu : usr=1.92%, sys=1.81%, ctx=1663, majf=0, minf=1 00:19:54.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:54.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.292 issued rwts: total=0,6323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.292 job5: (groupid=0, jobs=1): err= 0: pid=4132617: Sun Jul 14 07:37:09 2024 00:19:54.292 write: IOPS=300, BW=75.1MiB/s (78.8MB/s)(774MiB/10300msec); 0 zone resets 00:19:54.292 slat (usec): min=23, max=781486, avg=2035.95, stdev=17509.00 00:19:54.292 clat (msec): min=6, max=1089, avg=210.42, stdev=188.04 00:19:54.292 lat (msec): min=6, max=1089, 
avg=212.46, stdev=189.11 00:19:54.292 clat percentiles (msec): 00:19:54.292 | 1.00th=[ 20], 5.00th=[ 35], 10.00th=[ 51], 20.00th=[ 97], 00:19:54.292 | 30.00th=[ 131], 40.00th=[ 157], 50.00th=[ 174], 60.00th=[ 190], 00:19:54.292 | 70.00th=[ 222], 80.00th=[ 251], 90.00th=[ 355], 95.00th=[ 743], 00:19:54.292 | 99.00th=[ 1003], 99.50th=[ 1045], 99.90th=[ 1083], 99.95th=[ 1083], 00:19:54.292 | 99.99th=[ 1083] 00:19:54.292 bw ( KiB/s): min= 6144, max=124928, per=6.69%, avg=81702.74, stdev=31992.83, samples=19 00:19:54.292 iops : min= 24, max= 488, avg=319.11, stdev=124.94, samples=19 00:19:54.292 lat (msec) : 10=0.36%, 20=0.78%, 50=8.69%, 100=10.79%, 250=59.30% 00:19:54.292 lat (msec) : 500=14.21%, 750=1.29%, 1000=3.49%, 2000=1.10% 00:19:54.292 cpu : usr=0.84%, sys=1.10%, ctx=1930, majf=0, minf=1 00:19:54.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:54.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.292 issued rwts: total=0,3096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.292 job6: (groupid=0, jobs=1): err= 0: pid=4132618: Sun Jul 14 07:37:09 2024 00:19:54.292 write: IOPS=421, BW=105MiB/s (110MB/s)(1076MiB/10222msec); 0 zone resets 00:19:54.292 slat (usec): min=14, max=376572, avg=1860.86, stdev=8624.56 00:19:54.292 clat (msec): min=2, max=907, avg=150.07, stdev=130.54 00:19:54.292 lat (msec): min=2, max=907, avg=151.93, stdev=132.12 00:19:54.292 clat percentiles (msec): 00:19:54.292 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 48], 20.00th=[ 79], 00:19:54.292 | 30.00th=[ 94], 40.00th=[ 110], 50.00th=[ 127], 60.00th=[ 138], 00:19:54.292 | 70.00th=[ 157], 80.00th=[ 182], 90.00th=[ 245], 95.00th=[ 334], 00:19:54.292 | 99.00th=[ 844], 99.50th=[ 869], 99.90th=[ 902], 99.95th=[ 902], 00:19:54.292 | 99.99th=[ 911] 00:19:54.292 bw ( KiB/s): min=18432, max=185344, per=8.89%, avg=108530.10, stdev=50291.51, samples=20 00:19:54.292 iops : min= 72, max= 724, avg=423.90, stdev=196.42, samples=20 00:19:54.292 lat (msec) : 4=0.19%, 10=1.28%, 20=2.95%, 50=6.06%, 100=23.54% 00:19:54.292 lat (msec) : 250=56.95%, 500=6.11%, 750=1.14%, 1000=1.79% 00:19:54.292 cpu : usr=1.37%, sys=1.41%, ctx=2208, majf=0, minf=1 00:19:54.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:54.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.292 issued rwts: total=0,4304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.292 job7: (groupid=0, jobs=1): err= 0: pid=4132619: Sun Jul 14 07:37:09 2024 00:19:54.292 write: IOPS=496, BW=124MiB/s (130MB/s)(1261MiB/10163msec); 0 zone resets 00:19:54.292 slat (usec): min=19, max=117143, avg=1659.00, stdev=4297.55 00:19:54.292 clat (msec): min=3, max=350, avg=127.24, stdev=55.98 00:19:54.292 lat (msec): min=3, max=350, avg=128.90, stdev=56.62 00:19:54.292 clat percentiles (msec): 00:19:54.292 | 1.00th=[ 21], 5.00th=[ 43], 10.00th=[ 65], 20.00th=[ 77], 00:19:54.292 | 30.00th=[ 99], 40.00th=[ 116], 50.00th=[ 126], 60.00th=[ 136], 00:19:54.292 | 70.00th=[ 150], 80.00th=[ 165], 90.00th=[ 197], 95.00th=[ 228], 00:19:54.292 | 99.00th=[ 309], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 351], 00:19:54.292 | 99.99th=[ 351] 00:19:54.292 bw ( KiB/s): min=73580, max=202752, 
per=10.43%, avg=127440.25, stdev=38935.22, samples=20 00:19:54.292 iops : min= 287, max= 792, avg=497.75, stdev=152.10, samples=20 00:19:54.292 lat (msec) : 4=0.02%, 10=0.20%, 20=0.75%, 50=6.03%, 100=24.10% 00:19:54.292 lat (msec) : 250=65.61%, 500=3.29% 00:19:54.292 cpu : usr=1.65%, sys=1.48%, ctx=2087, majf=0, minf=1 00:19:54.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:54.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.292 issued rwts: total=0,5042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.292 job8: (groupid=0, jobs=1): err= 0: pid=4132632: Sun Jul 14 07:37:09 2024 00:19:54.292 write: IOPS=412, BW=103MiB/s (108MB/s)(1038MiB/10066msec); 0 zone resets 00:19:54.292 slat (usec): min=20, max=75073, avg=2071.63, stdev=5451.80 00:19:54.292 clat (msec): min=3, max=842, avg=153.03, stdev=91.00 00:19:54.292 lat (msec): min=3, max=842, avg=155.10, stdev=92.06 00:19:54.292 clat percentiles (msec): 00:19:54.292 | 1.00th=[ 11], 5.00th=[ 38], 10.00th=[ 64], 20.00th=[ 107], 00:19:54.292 | 30.00th=[ 121], 40.00th=[ 136], 50.00th=[ 148], 60.00th=[ 157], 00:19:54.292 | 70.00th=[ 174], 80.00th=[ 192], 90.00th=[ 211], 95.00th=[ 241], 00:19:54.292 | 99.00th=[ 651], 99.50th=[ 810], 99.90th=[ 844], 99.95th=[ 844], 00:19:54.292 | 99.99th=[ 844] 00:19:54.292 bw ( KiB/s): min=16384, max=156672, per=8.57%, avg=104683.90, stdev=31706.83, samples=20 00:19:54.292 iops : min= 64, max= 612, avg=408.90, stdev=123.84, samples=20 00:19:54.292 lat (msec) : 4=0.07%, 10=0.79%, 20=1.04%, 50=4.65%, 100=10.31% 00:19:54.292 lat (msec) : 250=78.83%, 500=2.82%, 750=0.79%, 1000=0.70% 00:19:54.292 cpu : usr=1.30%, sys=1.26%, ctx=1793, majf=0, minf=1 00:19:54.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:54.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.292 issued rwts: total=0,4152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.292 job9: (groupid=0, jobs=1): err= 0: pid=4132633: Sun Jul 14 07:37:09 2024 00:19:54.292 write: IOPS=561, BW=140MiB/s (147MB/s)(1421MiB/10117msec); 0 zone resets 00:19:54.292 slat (usec): min=22, max=161748, avg=1407.00, stdev=4406.16 00:19:54.292 clat (msec): min=2, max=353, avg=112.44, stdev=52.82 00:19:54.292 lat (msec): min=2, max=375, avg=113.84, stdev=53.37 00:19:54.292 clat percentiles (msec): 00:19:54.292 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 62], 20.00th=[ 79], 00:19:54.292 | 30.00th=[ 86], 40.00th=[ 90], 50.00th=[ 99], 60.00th=[ 114], 00:19:54.292 | 70.00th=[ 128], 80.00th=[ 150], 90.00th=[ 180], 95.00th=[ 222], 00:19:54.292 | 99.00th=[ 288], 99.50th=[ 321], 99.90th=[ 342], 99.95th=[ 351], 00:19:54.292 | 99.99th=[ 355] 00:19:54.292 bw ( KiB/s): min=59511, max=223744, per=11.78%, avg=143885.30, stdev=46582.15, samples=20 00:19:54.292 iops : min= 232, max= 874, avg=562.00, stdev=181.98, samples=20 00:19:54.292 lat (msec) : 4=0.07%, 10=0.33%, 20=1.58%, 50=4.57%, 100=45.11% 00:19:54.292 lat (msec) : 250=45.83%, 500=2.50% 00:19:54.292 cpu : usr=1.72%, sys=1.52%, ctx=2555, majf=0, minf=1 00:19:54.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:54.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:54.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.292 issued rwts: total=0,5684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.292 job10: (groupid=0, jobs=1): err= 0: pid=4132634: Sun Jul 14 07:37:09 2024 00:19:54.292 write: IOPS=440, BW=110MiB/s (115MB/s)(1118MiB/10154msec); 0 zone resets 00:19:54.292 slat (usec): min=22, max=434222, avg=1272.78, stdev=8009.70 00:19:54.292 clat (msec): min=3, max=1530, avg=143.99, stdev=153.06 00:19:54.292 lat (msec): min=3, max=1532, avg=145.26, stdev=154.02 00:19:54.292 clat percentiles (msec): 00:19:54.292 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 42], 20.00th=[ 59], 00:19:54.292 | 30.00th=[ 83], 40.00th=[ 101], 50.00th=[ 113], 60.00th=[ 132], 00:19:54.292 | 70.00th=[ 171], 80.00th=[ 197], 90.00th=[ 215], 95.00th=[ 284], 00:19:54.292 | 99.00th=[ 659], 99.50th=[ 1401], 99.90th=[ 1536], 99.95th=[ 1536], 00:19:54.292 | 99.99th=[ 1536] 00:19:54.292 bw ( KiB/s): min=53760, max=183808, per=9.24%, avg=112847.60, stdev=39155.29, samples=20 00:19:54.292 iops : min= 210, max= 718, avg=440.80, stdev=152.95, samples=20 00:19:54.292 lat (msec) : 4=0.04%, 10=0.31%, 20=1.83%, 50=14.23%, 100=23.42% 00:19:54.292 lat (msec) : 250=54.04%, 500=3.42%, 750=1.77%, 2000=0.94% 00:19:54.292 cpu : usr=1.27%, sys=1.77%, ctx=3074, majf=0, minf=1 00:19:54.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:54.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:54.292 issued rwts: total=0,4471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.292 00:19:54.292 Run status group 0 (all jobs): 00:19:54.292 WRITE: bw=1193MiB/s (1251MB/s), 75.1MiB/s-157MiB/s (78.8MB/s-165MB/s), io=12.0GiB (12.9GB), run=10066-10310msec 00:19:54.292 00:19:54.292 Disk stats (read/write): 00:19:54.293 nvme0n1: ios=47/6932, merge=0/0, ticks=1657/1213175, in_queue=1214832, util=99.02% 00:19:54.293 nvme10n1: ios=49/9081, merge=0/0, ticks=945/1187552, in_queue=1188497, util=98.86% 00:19:54.293 nvme1n1: ios=49/7991, merge=0/0, ticks=70/1207976, in_queue=1208046, util=97.74% 00:19:54.293 nvme2n1: ios=50/7399, merge=0/0, ticks=2971/1207089, in_queue=1210060, util=99.20% 00:19:54.293 nvme3n1: ios=49/12401, merge=0/0, ticks=2236/1180366, in_queue=1182602, util=99.31% 00:19:54.293 nvme4n1: ios=48/6077, merge=0/0, ticks=3172/1153399, in_queue=1156571, util=99.65% 00:19:54.293 nvme5n1: ios=49/8527, merge=0/0, ticks=47/1219444, in_queue=1219491, util=98.49% 00:19:54.293 nvme6n1: ios=49/9902, merge=0/0, ticks=3707/1197563, in_queue=1201270, util=99.91% 00:19:54.293 nvme7n1: ios=49/8039, merge=0/0, ticks=143/1212156, in_queue=1212299, util=99.49% 00:19:54.293 nvme8n1: ios=47/11164, merge=0/0, ticks=1444/1205095, in_queue=1206539, util=99.96% 00:19:54.293 nvme9n1: ios=0/8757, merge=0/0, ticks=0/1220857, in_queue=1220857, util=99.00% 00:19:54.293 07:37:09 -- target/multiconnection.sh@36 -- # sync 00:19:54.293 07:37:09 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:54.293 07:37:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:54.293 07:37:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:54.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:54.293 07:37:09 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK1 00:19:54.293 07:37:09 -- common/autotest_common.sh@1198 -- # local i=0 00:19:54.293 07:37:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:54.293 07:37:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:19:54.293 07:37:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:54.293 07:37:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:19:54.293 07:37:09 -- common/autotest_common.sh@1210 -- # return 0 00:19:54.293 07:37:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:54.293 07:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.293 07:37:09 -- common/autotest_common.sh@10 -- # set +x 00:19:54.293 07:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.293 07:37:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:54.293 07:37:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:54.293 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:54.293 07:37:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:54.293 07:37:10 -- common/autotest_common.sh@1198 -- # local i=0 00:19:54.293 07:37:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:54.293 07:37:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:19:54.293 07:37:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:54.293 07:37:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:19:54.293 07:37:10 -- common/autotest_common.sh@1210 -- # return 0 00:19:54.293 07:37:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:54.293 07:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.293 07:37:10 -- common/autotest_common.sh@10 -- # set +x 00:19:54.293 07:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.293 07:37:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:54.293 07:37:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:54.551 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:54.551 07:37:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:54.551 07:37:10 -- common/autotest_common.sh@1198 -- # local i=0 00:19:54.551 07:37:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:54.551 07:37:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:19:54.551 07:37:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:54.551 07:37:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:19:54.551 07:37:10 -- common/autotest_common.sh@1210 -- # return 0 00:19:54.551 07:37:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:54.551 07:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.551 07:37:10 -- common/autotest_common.sh@10 -- # set +x 00:19:54.551 07:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.551 07:37:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:54.551 07:37:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:54.809 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:54.809 07:37:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:54.809 07:37:10 -- common/autotest_common.sh@1198 -- # local 
i=0 00:19:54.809 07:37:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:54.809 07:37:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:19:54.809 07:37:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:54.809 07:37:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:19:54.809 07:37:10 -- common/autotest_common.sh@1210 -- # return 0 00:19:54.809 07:37:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:54.809 07:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.809 07:37:10 -- common/autotest_common.sh@10 -- # set +x 00:19:54.809 07:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.809 07:37:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:54.809 07:37:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:55.067 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:55.067 07:37:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:55.067 07:37:11 -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.067 07:37:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:55.067 07:37:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:19:55.067 07:37:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:55.067 07:37:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:19:55.067 07:37:11 -- common/autotest_common.sh@1210 -- # return 0 00:19:55.067 07:37:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:55.067 07:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.067 07:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:55.067 07:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.067 07:37:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:55.067 07:37:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:55.067 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:55.067 07:37:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:55.067 07:37:11 -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.067 07:37:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:55.067 07:37:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:19:55.067 07:37:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:55.067 07:37:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:19:55.324 07:37:11 -- common/autotest_common.sh@1210 -- # return 0 00:19:55.324 07:37:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:55.324 07:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.324 07:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:55.324 07:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.324 07:37:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:55.324 07:37:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:55.324 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:55.324 07:37:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:55.324 07:37:11 -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.324 07:37:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:55.324 
07:37:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:19:55.324 07:37:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:55.324 07:37:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:19:55.324 07:37:11 -- common/autotest_common.sh@1210 -- # return 0 00:19:55.324 07:37:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:55.324 07:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.324 07:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:55.324 07:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.324 07:37:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:55.324 07:37:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:55.583 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:55.583 07:37:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:55.583 07:37:11 -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.583 07:37:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:55.583 07:37:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:19:55.583 07:37:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:55.583 07:37:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:19:55.583 07:37:11 -- common/autotest_common.sh@1210 -- # return 0 00:19:55.583 07:37:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:55.583 07:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.583 07:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:55.583 07:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.583 07:37:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:55.583 07:37:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:55.583 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:55.583 07:37:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:55.583 07:37:11 -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.583 07:37:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:55.583 07:37:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:19:55.583 07:37:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:55.583 07:37:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:19:55.583 07:37:11 -- common/autotest_common.sh@1210 -- # return 0 00:19:55.583 07:37:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:55.583 07:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.583 07:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:55.841 07:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.841 07:37:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:55.841 07:37:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:55.841 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:55.841 07:37:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:55.841 07:37:11 -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.841 07:37:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:55.841 07:37:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:19:55.841 07:37:11 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:55.841 07:37:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:19:55.841 07:37:11 -- common/autotest_common.sh@1210 -- # return 0 00:19:55.841 07:37:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:55.841 07:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.841 07:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:55.841 07:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.841 07:37:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:55.841 07:37:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:55.841 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:55.841 07:37:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:55.841 07:37:11 -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.841 07:37:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:55.841 07:37:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:19:55.841 07:37:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:56.099 07:37:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:19:56.099 07:37:12 -- common/autotest_common.sh@1210 -- # return 0 00:19:56.099 07:37:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:56.100 07:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.100 07:37:12 -- common/autotest_common.sh@10 -- # set +x 00:19:56.100 07:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.100 07:37:12 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:56.100 07:37:12 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:56.100 07:37:12 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:56.100 07:37:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:56.100 07:37:12 -- nvmf/common.sh@116 -- # sync 00:19:56.100 07:37:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:56.100 07:37:12 -- nvmf/common.sh@119 -- # set +e 00:19:56.100 07:37:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:56.100 07:37:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:56.100 rmmod nvme_tcp 00:19:56.100 rmmod nvme_fabrics 00:19:56.100 rmmod nvme_keyring 00:19:56.100 07:37:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:56.100 07:37:12 -- nvmf/common.sh@123 -- # set -e 00:19:56.100 07:37:12 -- nvmf/common.sh@124 -- # return 0 00:19:56.100 07:37:12 -- nvmf/common.sh@477 -- # '[' -n 4127171 ']' 00:19:56.100 07:37:12 -- nvmf/common.sh@478 -- # killprocess 4127171 00:19:56.100 07:37:12 -- common/autotest_common.sh@926 -- # '[' -z 4127171 ']' 00:19:56.100 07:37:12 -- common/autotest_common.sh@930 -- # kill -0 4127171 00:19:56.100 07:37:12 -- common/autotest_common.sh@931 -- # uname 00:19:56.100 07:37:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:56.100 07:37:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4127171 00:19:56.100 07:37:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:56.100 07:37:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:56.100 07:37:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4127171' 00:19:56.100 killing process with pid 4127171 00:19:56.100 07:37:12 -- common/autotest_common.sh@945 -- # kill 4127171 00:19:56.100 07:37:12 -- 
common/autotest_common.sh@950 -- # wait 4127171 00:19:56.683 07:37:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:56.683 07:37:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:56.683 07:37:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:56.683 07:37:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.683 07:37:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:56.683 07:37:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.683 07:37:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.683 07:37:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.584 07:37:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:58.584 00:19:58.584 real 1m0.938s 00:19:58.584 user 3m20.807s 00:19:58.584 sys 0m24.839s 00:19:58.584 07:37:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.584 07:37:14 -- common/autotest_common.sh@10 -- # set +x 00:19:58.584 ************************************ 00:19:58.584 END TEST nvmf_multiconnection 00:19:58.584 ************************************ 00:19:58.584 07:37:14 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:58.584 07:37:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:58.584 07:37:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.584 07:37:14 -- common/autotest_common.sh@10 -- # set +x 00:19:58.584 ************************************ 00:19:58.584 START TEST nvmf_initiator_timeout 00:19:58.584 ************************************ 00:19:58.584 07:37:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:58.842 * Looking for test storage... 
00:19:58.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:58.842 07:37:14 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.842 07:37:14 -- nvmf/common.sh@7 -- # uname -s 00:19:58.842 07:37:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.842 07:37:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.842 07:37:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.842 07:37:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.842 07:37:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.842 07:37:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.842 07:37:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.842 07:37:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.842 07:37:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.842 07:37:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.842 07:37:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.842 07:37:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.842 07:37:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.842 07:37:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.842 07:37:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.842 07:37:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:58.842 07:37:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.842 07:37:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.842 07:37:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.842 07:37:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.842 07:37:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.842 07:37:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.842 07:37:14 -- paths/export.sh@5 -- # export PATH 00:19:58.842 07:37:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.842 07:37:14 -- nvmf/common.sh@46 -- # : 0 00:19:58.842 07:37:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:58.842 07:37:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:58.842 07:37:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:58.842 07:37:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.842 07:37:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.842 07:37:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:58.842 07:37:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:58.842 07:37:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:58.842 07:37:14 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.843 07:37:14 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.843 07:37:14 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:58.843 07:37:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:58.843 07:37:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.843 07:37:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:58.843 07:37:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:58.843 07:37:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:58.843 07:37:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.843 07:37:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.843 07:37:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.843 07:37:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:58.843 07:37:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:58.843 07:37:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:58.843 07:37:14 -- common/autotest_common.sh@10 -- # set +x 00:20:00.742 07:37:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:00.742 07:37:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:00.742 07:37:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:00.742 07:37:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:00.742 07:37:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:00.742 07:37:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:00.742 07:37:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:00.742 07:37:16 -- nvmf/common.sh@294 -- # net_devs=() 00:20:00.742 07:37:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:00.742 
07:37:16 -- nvmf/common.sh@295 -- # e810=() 00:20:00.742 07:37:16 -- nvmf/common.sh@295 -- # local -ga e810 00:20:00.742 07:37:16 -- nvmf/common.sh@296 -- # x722=() 00:20:00.742 07:37:16 -- nvmf/common.sh@296 -- # local -ga x722 00:20:00.742 07:37:16 -- nvmf/common.sh@297 -- # mlx=() 00:20:00.742 07:37:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:00.742 07:37:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.742 07:37:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:00.742 07:37:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:00.742 07:37:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:00.742 07:37:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.742 07:37:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:00.742 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:00.742 07:37:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.742 07:37:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:00.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:00.742 07:37:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:00.742 07:37:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.742 07:37:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.742 07:37:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.742 07:37:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.742 07:37:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:20:00.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:00.742 07:37:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.742 07:37:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.742 07:37:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.742 07:37:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.742 07:37:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.742 07:37:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:00.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:00.742 07:37:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.742 07:37:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:00.742 07:37:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:00.742 07:37:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:00.742 07:37:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.742 07:37:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.742 07:37:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.742 07:37:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:00.742 07:37:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.742 07:37:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.742 07:37:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:00.742 07:37:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.742 07:37:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.742 07:37:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:00.742 07:37:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:00.742 07:37:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.742 07:37:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.742 07:37:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.742 07:37:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.742 07:37:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:00.742 07:37:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.742 07:37:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.742 07:37:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.742 07:37:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:00.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:20:00.742 00:20:00.742 --- 10.0.0.2 ping statistics --- 00:20:00.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.742 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:20:00.742 07:37:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:20:00.742 00:20:00.742 --- 10.0.0.1 ping statistics --- 00:20:00.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.742 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:20:00.742 07:37:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.742 07:37:16 -- nvmf/common.sh@410 -- # return 0 00:20:00.742 07:37:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:00.742 07:37:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.742 07:37:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:00.742 07:37:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.742 07:37:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:00.742 07:37:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:00.742 07:37:16 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:20:00.742 07:37:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:00.742 07:37:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:00.742 07:37:16 -- common/autotest_common.sh@10 -- # set +x 00:20:00.742 07:37:16 -- nvmf/common.sh@469 -- # nvmfpid=4135997 00:20:00.742 07:37:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.742 07:37:16 -- nvmf/common.sh@470 -- # waitforlisten 4135997 00:20:00.742 07:37:16 -- common/autotest_common.sh@819 -- # '[' -z 4135997 ']' 00:20:00.742 07:37:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.742 07:37:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.742 07:37:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.742 07:37:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.742 07:37:16 -- common/autotest_common.sh@10 -- # set +x 00:20:00.742 [2024-07-14 07:37:16.769186] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:00.742 [2024-07-14 07:37:16.769276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.742 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.742 [2024-07-14 07:37:16.839637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.000 [2024-07-14 07:37:16.953218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:01.000 [2024-07-14 07:37:16.953356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.000 [2024-07-14 07:37:16.953373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.001 [2024-07-14 07:37:16.953386] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
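The nvmf_tcp_init block above gives the test a self-contained NVMe/TCP topology on a single host: one E810 port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, while its peer (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, with an iptables rule admitting TCP port 4420 and a ping in each direction to prove reachability. A minimal standalone sketch of the same wiring, assuming the cvl_0_0/cvl_0_1 interface names from this run and root privileges:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1     # drop stale addresses
  ip netns add "$NS"                                       # namespace for the target
  ip link set cvl_0_0 netns "$NS"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator

Because the target namespace owns cvl_0_0, the nvmf_tgt launch above is wrapped in ip netns exec cvl_0_0_ns_spdk, which is exactly what NVMF_TARGET_NS_CMD expands to.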
00:20:01.001 [2024-07-14 07:37:16.953438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.001 [2024-07-14 07:37:16.953465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.001 [2024-07-14 07:37:16.953527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.001 [2024-07-14 07:37:16.953530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.565 07:37:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.565 07:37:17 -- common/autotest_common.sh@852 -- # return 0 00:20:01.565 07:37:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:01.565 07:37:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:01.565 07:37:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.823 07:37:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.823 07:37:17 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:01.823 07:37:17 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:01.823 07:37:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.823 07:37:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.823 Malloc0 00:20:01.823 07:37:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.823 07:37:17 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:20:01.823 07:37:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.823 07:37:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.823 Delay0 00:20:01.823 07:37:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.823 07:37:17 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.823 07:37:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.823 07:37:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.823 [2024-07-14 07:37:17.778584] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.823 07:37:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.823 07:37:17 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:01.823 07:37:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.823 07:37:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.823 07:37:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.823 07:37:17 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:01.823 07:37:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.823 07:37:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.823 07:37:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.823 07:37:17 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.823 07:37:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.823 07:37:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.823 [2024-07-14 07:37:17.806876] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.823 07:37:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.823 07:37:17 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:20:02.390 07:37:18 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:20:02.390 07:37:18 -- common/autotest_common.sh@1177 -- # local i=0
00:20:02.390 07:37:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:20:02.390 07:37:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:20:02.390 07:37:18 -- common/autotest_common.sh@1184 -- # sleep 2
00:20:04.288 07:37:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:20:04.288 07:37:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:20:04.288 07:37:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:20:04.288 07:37:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:20:04.288 07:37:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:20:04.288 07:37:20 -- common/autotest_common.sh@1187 -- # return 0
00:20:04.288 07:37:20 -- target/initiator_timeout.sh@35 -- # fio_pid=4136448
00:20:04.288 07:37:20 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:20:04.288 07:37:20 -- target/initiator_timeout.sh@37 -- # sleep 3
00:20:04.288 [global]
00:20:04.288 thread=1
00:20:04.288 invalidate=1
00:20:04.288 rw=write
00:20:04.288 time_based=1
00:20:04.288 runtime=60
00:20:04.288 ioengine=libaio
00:20:04.288 direct=1
00:20:04.288 bs=4096
00:20:04.288 iodepth=1
00:20:04.288 norandommap=0
00:20:04.288 numjobs=1
00:20:04.288
00:20:04.288 verify_dump=1
00:20:04.288 verify_backlog=512
00:20:04.288 verify_state_save=0
00:20:04.288 do_verify=1
00:20:04.288 verify=crc32c-intel
00:20:04.288 [job0]
00:20:04.288 filename=/dev/nvme0n1
00:20:04.546 Could not set queue depth (nvme0n1)
00:20:04.546 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:20:04.546 fio-3.35
00:20:04.546 Starting 1 thread
00:20:07.822 07:37:23 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:20:07.822 07:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:07.822 07:37:23 -- common/autotest_common.sh@10 -- # set +x
00:20:07.822 true
00:20:07.822 07:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:07.822 07:37:23 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:20:07.822 07:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:07.822 07:37:23 -- common/autotest_common.sh@10 -- # set +x
00:20:07.822 true
00:20:07.822 07:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:07.822 07:37:23 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:20:07.822 07:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:07.822 07:37:23 -- common/autotest_common.sh@10 -- # set +x
00:20:07.822 true
00:20:07.822 07:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:07.822 07:37:23 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:20:07.822 07:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:07.822 07:37:23 -- common/autotest_common.sh@10 -- # set +x
00:20:07.822 true
00:20:07.822 07:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
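Taken together, the RPCs in this stretch implement the initiator-timeout scenario: a 64 MiB Malloc bdev is wrapped in a delay bdev with 30 us baseline latencies, exported through subsystem nqn.2016-06.io.spdk:cnode1, and once fio is writing through /dev/nvme0n1 the latency classes are stretched to 31 s (values are in microseconds) so that in-flight I/O outlives the initiator's timeout. A condensed sketch of the same sequence via scripts/rpc.py, which is what the rpc_cmd helper wraps (default RPC socket assumed):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB backing bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us baseline
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with fio running, stretch the delay far past the initiator's I/O timeout
  $rpc bdev_delay_update_latency Delay0 avg_read  31000000
  $rpc bdev_delay_update_latency Delay0 avg_write 31000000
  $rpc bdev_delay_update_latency Delay0 p99_read  31000000
  $rpc bdev_delay_update_latency Delay0 p99_write 310000000

The sleep 3 that follows gives fio time to queue I/O against the slowed device before the latencies are dropped back to 30 us and the job is left to run out its 60 s clock.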
00:20:07.822 07:37:23 -- target/initiator_timeout.sh@45 -- # sleep 3
00:20:10.344 07:37:26 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:20:10.344 07:37:26 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:10.344 07:37:26 -- common/autotest_common.sh@10 -- # set +x
00:20:10.344 true
00:20:10.345 07:37:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:10.345 07:37:26 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:20:10.345 07:37:26 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:10.345 07:37:26 -- common/autotest_common.sh@10 -- # set +x
00:20:10.345 true
00:20:10.345 07:37:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:10.345 07:37:26 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:20:10.345 07:37:26 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:10.345 07:37:26 -- common/autotest_common.sh@10 -- # set +x
00:20:10.345 true
00:20:10.345 07:37:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:10.345 07:37:26 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:20:10.345 07:37:26 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:10.345 07:37:26 -- common/autotest_common.sh@10 -- # set +x
00:20:10.345 true
00:20:10.345 07:37:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:10.345 07:37:26 -- target/initiator_timeout.sh@53 -- # fio_status=0
00:20:10.345 07:37:26 -- target/initiator_timeout.sh@54 -- # wait 4136448
00:21:06.585
00:21:06.585 job0: (groupid=0, jobs=1): err= 0: pid=4136517: Sun Jul 14 07:38:20 2024
00:21:06.585 read: IOPS=171, BW=686KiB/s (702kB/s)(40.2MiB/60011msec)
00:21:06.585 slat (usec): min=5, max=18567, avg=21.21, stdev=197.44
00:21:06.585 clat (usec): min=362, max=41040k, avg=5431.45, stdev=404612.25
00:21:06.585 lat (usec): min=368, max=41040k, avg=5452.66, stdev=404612.25
00:21:06.585 clat percentiles (usec):
00:21:06.585 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 408],
00:21:06.585 | 30.00th=[ 424], 40.00th=[ 474], 50.00th=[ 529], 60.00th=[ 553],
00:21:06.585 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 668],
00:21:06.585 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:21:06.585 | 99.99th=[44827]
00:21:06.585 write: IOPS=179, BW=717KiB/s (734kB/s)(42.0MiB/60011msec); 0 zone resets
00:21:06.585 slat (usec): min=6, max=30200, avg=23.48, stdev=291.30
00:21:06.585 clat (usec): min=235, max=2342, avg=329.01, stdev=56.76
00:21:06.585 lat (usec): min=241, max=30659, avg=352.49, stdev=299.41
00:21:06.585 clat percentiles (usec):
00:21:06.585 | 1.00th=[ 245], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 285],
00:21:06.585 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 330],
00:21:06.585 | 70.00th=[ 347], 80.00th=[ 375], 90.00th=[ 416], 95.00th=[ 429],
00:21:06.585 | 99.00th=[ 457], 99.50th=[ 465], 99.90th=[ 494], 99.95th=[ 498],
00:21:06.585 | 99.99th=[ 1020]
00:21:06.585 bw ( KiB/s): min= 648, max= 6600, per=100.00%, avg=4352.00, stdev=1150.03, samples=19
00:21:06.585 iops : min= 162, max= 1650, avg=1088.00, stdev=287.51, samples=19
00:21:06.585 lat (usec) : 250=1.15%, 500=71.58%, 750=26.07%, 1000=0.05%
00:21:06.585 lat (msec) : 2=0.02%, 4=0.01%, 50=1.12%, >=2000=0.01%
00:21:06.585 cpu : usr=0.51%, sys=0.87%, ctx=21048, majf=0, minf=2
00:21:06.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:21:06.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:06.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:06.585 issued rwts: total=10290,10752,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:06.585 latency : target=0, window=0, percentile=100.00%, depth=1
00:21:06.585
00:21:06.585 Run status group 0 (all jobs):
00:21:06.586 READ: bw=686KiB/s (702kB/s), 686KiB/s-686KiB/s (702kB/s-702kB/s), io=40.2MiB (42.1MB), run=60011-60011msec
00:21:06.586 WRITE: bw=717KiB/s (734kB/s), 717KiB/s-717KiB/s (734kB/s-734kB/s), io=42.0MiB (44.0MB), run=60011-60011msec
00:21:06.586
00:21:06.586 Disk stats (read/write):
00:21:06.586 nvme0n1: ios=10339/10752, merge=0/0, ticks=15884/3313, in_queue=19197, util=99.80%
00:21:06.586 07:38:20 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:21:06.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:21:06.586 07:38:20 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:21:06.586 07:38:20 -- common/autotest_common.sh@1198 -- # local i=0
00:21:06.586 07:38:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:21:06.586 07:38:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:21:06.586 07:38:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:21:06.586 07:38:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:21:06.586 07:38:20 -- common/autotest_common.sh@1210 -- # return 0
00:21:06.586 07:38:20 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:21:06.586 07:38:20 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:21:06.586 nvmf hotplug test: fio successful as expected
00:21:06.586 07:38:20 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:06.586 07:38:20 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:06.586 07:38:20 -- common/autotest_common.sh@10 -- # set +x
00:21:06.586 07:38:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:06.586 07:38:20 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:21:06.586 07:38:20 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:21:06.586 07:38:20 -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:21:06.586 07:38:20 -- nvmf/common.sh@476 -- # nvmfcleanup
00:21:06.586 07:38:20 -- nvmf/common.sh@116 -- # sync
00:21:06.586 07:38:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:21:06.586 07:38:20 -- nvmf/common.sh@119 -- # set +e
00:21:06.586 07:38:20 -- nvmf/common.sh@120 -- # for i in {1..20}
00:21:06.586 07:38:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:21:06.586 rmmod nvme_tcp
00:21:06.586 rmmod nvme_fabrics
00:21:06.586 rmmod nvme_keyring
00:21:06.586 07:38:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:21:06.586 07:38:20 -- nvmf/common.sh@123 -- # set -e
00:21:06.586 07:38:20 -- nvmf/common.sh@124 -- # return 0
00:21:06.586 07:38:20 -- nvmf/common.sh@477 -- # '[' -n 4135997 ']'
00:21:06.586 07:38:20 -- nvmf/common.sh@478 -- # killprocess 4135997
00:21:06.586 07:38:20 -- common/autotest_common.sh@926 -- # '[' -z 4135997 ']'
00:21:06.586 07:38:20 -- common/autotest_common.sh@930 -- # kill -0 4135997
00:21:06.586 07:38:20 -- common/autotest_common.sh@931 -- # uname
00:21:06.586 07:38:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:06.586 07:38:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4135997
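With the verify pass clean ('nvmf hotplug test: fio successful as expected'), teardown mirrors setup in reverse. A sketch of the cleanup the log performs here, with the PID from this run shown for concreteness and $rpc as in the earlier sketch:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drop the initiator session
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state                      # fio verify-state file
  sync
  modprobe -v -r nvme-tcp                                # also drops nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 4135997                                           # nvmf_tgt pid, then wait for it to exit

The fio summary also records the injected stall: the read side (likely dominated by the crc32c verify reads) shows a clat max of about 41040k us, roughly 41 s, from I/O held behind the 31 s delay, while write completion latencies measured after the delays were restored stay between 235 us and about 2.3 ms.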
00:21:06.586 07:38:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:06.586 07:38:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:06.586 07:38:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4135997' 00:21:06.586 killing process with pid 4135997 00:21:06.586 07:38:21 -- common/autotest_common.sh@945 -- # kill 4135997 00:21:06.586 07:38:21 -- common/autotest_common.sh@950 -- # wait 4135997 00:21:06.586 07:38:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:06.586 07:38:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:06.586 07:38:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:06.586 07:38:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.586 07:38:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:06.586 07:38:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.586 07:38:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.586 07:38:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.524 07:38:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:07.524 00:21:07.524 real 1m8.667s 00:21:07.524 user 4m10.851s 00:21:07.524 sys 0m8.322s 00:21:07.524 07:38:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.524 07:38:23 -- common/autotest_common.sh@10 -- # set +x 00:21:07.524 ************************************ 00:21:07.524 END TEST nvmf_initiator_timeout 00:21:07.524 ************************************ 00:21:07.524 07:38:23 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:21:07.524 07:38:23 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:21:07.524 07:38:23 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:21:07.524 07:38:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:07.524 07:38:23 -- common/autotest_common.sh@10 -- # set +x 00:21:09.423 07:38:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:09.423 07:38:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:09.423 07:38:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:09.423 07:38:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:09.423 07:38:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:09.423 07:38:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:09.423 07:38:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:09.423 07:38:25 -- nvmf/common.sh@294 -- # net_devs=() 00:21:09.423 07:38:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:09.423 07:38:25 -- nvmf/common.sh@295 -- # e810=() 00:21:09.423 07:38:25 -- nvmf/common.sh@295 -- # local -ga e810 00:21:09.423 07:38:25 -- nvmf/common.sh@296 -- # x722=() 00:21:09.423 07:38:25 -- nvmf/common.sh@296 -- # local -ga x722 00:21:09.423 07:38:25 -- nvmf/common.sh@297 -- # mlx=() 00:21:09.423 07:38:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:09.423 07:38:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.423 07:38:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:09.423 07:38:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:09.423 07:38:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:09.423 07:38:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:09.423 07:38:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:09.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:09.423 07:38:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:09.423 07:38:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:09.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:09.423 07:38:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:09.423 07:38:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:09.423 07:38:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.423 07:38:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:09.423 07:38:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.423 07:38:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:09.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:09.423 07:38:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.423 07:38:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:09.423 07:38:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.423 07:38:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:09.423 07:38:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.423 07:38:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:09.423 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:09.423 07:38:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.423 07:38:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:09.423 07:38:25 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.423 07:38:25 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:21:09.423 07:38:25 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:09.423 07:38:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:09.423 07:38:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:09.423 07:38:25 -- common/autotest_common.sh@10 -- # set +x 00:21:09.423 ************************************ 00:21:09.423 START TEST nvmf_perf_adq 00:21:09.423 ************************************ 00:21:09.423 07:38:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:09.423 * Looking for test storage... 00:21:09.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:09.423 07:38:25 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.423 07:38:25 -- nvmf/common.sh@7 -- # uname -s 00:21:09.423 07:38:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.423 07:38:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.423 07:38:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.423 07:38:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.423 07:38:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.423 07:38:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.423 07:38:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.423 07:38:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.423 07:38:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.423 07:38:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.424 07:38:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.424 07:38:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.424 07:38:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.424 07:38:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.424 07:38:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.424 07:38:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.424 07:38:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.424 07:38:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.424 07:38:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.424 07:38:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.424 07:38:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.424 07:38:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.424 07:38:25 -- paths/export.sh@5 -- # export PATH 00:21:09.424 07:38:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.424 07:38:25 -- nvmf/common.sh@46 -- # : 0 00:21:09.424 07:38:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:09.424 07:38:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:09.424 07:38:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:09.424 07:38:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.424 07:38:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.424 07:38:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:09.424 07:38:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:09.424 07:38:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:09.424 07:38:25 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:09.424 07:38:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:09.424 07:38:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.319 07:38:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:11.319 07:38:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:11.319 07:38:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:11.319 07:38:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:11.319 07:38:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:11.319 07:38:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:11.319 07:38:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:11.319 07:38:27 -- nvmf/common.sh@294 -- # net_devs=() 00:21:11.319 07:38:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:11.319 07:38:27 -- nvmf/common.sh@295 -- # e810=() 00:21:11.319 07:38:27 -- nvmf/common.sh@295 -- # local -ga e810 00:21:11.319 07:38:27 -- nvmf/common.sh@296 -- # x722=() 00:21:11.319 07:38:27 -- nvmf/common.sh@296 -- # local -ga x722 00:21:11.319 07:38:27 -- nvmf/common.sh@297 -- # mlx=() 00:21:11.319 07:38:27 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:21:11.319 07:38:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.319 07:38:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:11.319 07:38:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:11.319 07:38:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:11.319 07:38:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:11.319 07:38:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:11.319 07:38:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:11.319 07:38:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.319 07:38:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:11.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:11.319 07:38:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:11.319 07:38:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:11.319 07:38:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.319 07:38:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.319 07:38:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:11.319 07:38:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.319 07:38:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:11.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:11.320 07:38:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:11.320 07:38:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:11.320 07:38:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.320 07:38:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.320 07:38:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:11.320 07:38:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:11.320 07:38:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:11.320 07:38:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:11.320 07:38:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.320 07:38:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.320 07:38:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.320 07:38:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.320 07:38:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:11.320 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:11.320 07:38:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.320 07:38:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.320 07:38:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
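This enumeration pass is the same gather_supported_nvmf_pci_devs walk as before: the e810/x722/mlx arrays are keyed by PCI device ID (0x1592 and 0x159b for Intel E810, 0x37d2 for X722, plus the Mellanox IDs), matching functions are collected into pci_devs, and each function is resolved to its kernel netdev through the /sys/bus/pci/devices/<bdf>/net/ glob seen in the trace. A standalone approximation using lspci, assuming only the two E810 IDs observed in this run (the helper itself reads a pre-built pci_bus_cache):

  # map supported NIC PCI functions to their net devices
  for id in 1592 159b; do
      for pci in $(lspci -Dn -d 8086:$id | awk '{print $1}'); do
          for dev in /sys/bus/pci/devices/$pci/net/*; do
              [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
          done
      done
  done

On this host both 0000:0a:00.x functions resolve to the renamed interfaces cvl_0_0 and cvl_0_1, which is what feeds TCP_INTERFACE_LIST.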
00:21:11.320 07:38:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.320 07:38:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.320 07:38:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:11.320 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:11.320 07:38:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.320 07:38:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:11.320 07:38:27 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.320 07:38:27 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:11.320 07:38:27 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:11.320 07:38:27 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:21:11.320 07:38:27 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:11.883 07:38:28 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:14.408 07:38:29 -- target/perf_adq.sh@54 -- # sleep 5 00:21:19.684 07:38:34 -- target/perf_adq.sh@67 -- # nvmftestinit 00:21:19.684 07:38:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:19.684 07:38:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.684 07:38:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:19.684 07:38:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:19.684 07:38:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:19.684 07:38:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.684 07:38:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.684 07:38:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.684 07:38:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:19.684 07:38:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:19.684 07:38:34 -- common/autotest_common.sh@10 -- # set +x 00:21:19.684 07:38:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:19.684 07:38:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:19.684 07:38:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:19.684 07:38:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:19.684 07:38:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:19.684 07:38:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:19.684 07:38:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:19.684 07:38:34 -- nvmf/common.sh@294 -- # net_devs=() 00:21:19.684 07:38:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:19.684 07:38:34 -- nvmf/common.sh@295 -- # e810=() 00:21:19.684 07:38:34 -- nvmf/common.sh@295 -- # local -ga e810 00:21:19.684 07:38:34 -- nvmf/common.sh@296 -- # x722=() 00:21:19.684 07:38:34 -- nvmf/common.sh@296 -- # local -ga x722 00:21:19.684 07:38:34 -- nvmf/common.sh@297 -- # mlx=() 00:21:19.684 07:38:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:19.684 07:38:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.684 07:38:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:19.684 07:38:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:19.684 07:38:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:19.684 07:38:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:19.684 07:38:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:19.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:19.684 07:38:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:19.684 07:38:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:19.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:19.684 07:38:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:19.684 07:38:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:19.684 07:38:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.684 07:38:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:19.684 07:38:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.684 07:38:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:19.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:19.684 07:38:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.684 07:38:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:19.684 07:38:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.684 07:38:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:19.684 07:38:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.684 07:38:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:19.684 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:19.684 07:38:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.684 07:38:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:19.684 07:38:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:19.684 07:38:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:19.684 07:38:34 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:19.684 07:38:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:19.684 07:38:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.684 07:38:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.684 07:38:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.684 07:38:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:19.684 07:38:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.684 07:38:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.684 07:38:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:19.684 07:38:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.684 07:38:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.684 07:38:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:19.684 07:38:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:19.684 07:38:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.684 07:38:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.684 07:38:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.684 07:38:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.684 07:38:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:19.684 07:38:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.684 07:38:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.684 07:38:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.684 07:38:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:19.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:21:19.684 00:21:19.684 --- 10.0.0.2 ping statistics --- 00:21:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.684 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:19.684 07:38:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:21:19.684 00:21:19.684 --- 10.0.0.1 ping statistics --- 00:21:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.684 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:19.684 07:38:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.684 07:38:35 -- nvmf/common.sh@410 -- # return 0 00:21:19.684 07:38:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:19.684 07:38:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.684 07:38:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:19.684 07:38:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:19.684 07:38:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.684 07:38:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:19.684 07:38:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:19.684 07:38:35 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.684 07:38:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:19.684 07:38:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:19.684 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.684 07:38:35 -- nvmf/common.sh@469 -- # nvmfpid=4148328 00:21:19.684 07:38:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.684 07:38:35 -- nvmf/common.sh@470 -- # waitforlisten 4148328 00:21:19.684 07:38:35 -- common/autotest_common.sh@819 -- # '[' -z 4148328 ']' 00:21:19.684 07:38:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.684 07:38:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:19.684 07:38:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.684 07:38:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:19.684 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.684 [2024-07-14 07:38:35.166404] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:19.684 [2024-07-14 07:38:35.166475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.684 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.684 [2024-07-14 07:38:35.231146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.684 [2024-07-14 07:38:35.339303] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:19.684 [2024-07-14 07:38:35.339461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.684 [2024-07-14 07:38:35.339479] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.684 [2024-07-14 07:38:35.339491] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
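Unlike the earlier target, this one is started with --wait-for-rpc: socket-implementation options can only be applied before the framework initializes, so the app idles until configured over RPC. A sketch of the configuration sequence the log applies next, with the rpc.py path as before and the placement-id value 0 used at this stage of the test:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  $rpc framework_start_init            # reactors finish booting only after this
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The later check (nvmf_get_stats piped through jq) counts poll groups that own exactly one active qpair and expects four of them: spdk_nvme_perf runs with -c 0xF0, four I/O queues, against the target's four-reactor 0xF mask, so each poll group should service one connection.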
00:21:19.684 [2024-07-14 07:38:35.339551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.684 [2024-07-14 07:38:35.339622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.685 [2024-07-14 07:38:35.339688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.685 [2024-07-14 07:38:35.339690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.685 07:38:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:19.685 07:38:35 -- common/autotest_common.sh@852 -- # return 0 00:21:19.685 07:38:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:19.685 07:38:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:19.685 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.685 07:38:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.685 07:38:35 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:21:19.685 07:38:35 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:19.685 07:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.685 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.685 07:38:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.685 07:38:35 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:19.685 07:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.685 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.685 07:38:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.685 07:38:35 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:19.685 07:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.685 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.685 [2024-07-14 07:38:35.523899] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.685 07:38:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.685 07:38:35 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:19.685 07:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.685 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.685 Malloc1 00:21:19.685 07:38:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.685 07:38:35 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.685 07:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.685 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.685 07:38:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.685 07:38:35 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:19.685 07:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.685 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.685 07:38:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.685 07:38:35 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.685 07:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.685 07:38:35 -- common/autotest_common.sh@10 -- # set +x 00:21:19.685 [2024-07-14 07:38:35.577147] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.685 07:38:35 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:19.685 07:38:35 -- target/perf_adq.sh@73 -- # perfpid=4148472
00:21:19.685 07:38:35 -- target/perf_adq.sh@74 -- # sleep 2
00:21:19.685 07:38:35 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:21:19.685 EAL: No free 2048 kB hugepages reported on node 1
00:21:21.597 07:38:37 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats
00:21:21.597 07:38:37 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:21:21.597 07:38:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:21.597 07:38:37 -- target/perf_adq.sh@76 -- # wc -l
00:21:21.597 07:38:37 -- common/autotest_common.sh@10 -- # set +x
00:21:21.597 07:38:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:21.597 07:38:37 -- target/perf_adq.sh@76 -- # count=4
00:21:21.597 07:38:37 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]]
00:21:21.597 07:38:37 -- target/perf_adq.sh@81 -- # wait 4148472
00:21:29.720 Initializing NVMe Controllers
00:21:29.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:29.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:21:29.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:21:29.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:21:29.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:21:29.720 Initialization complete. Launching workers.
00:21:29.720 ========================================================
00:21:29.720 Latency(us)
00:21:29.720 Device Information : IOPS MiB/s Average min max
00:21:29.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10349.15 40.43 6184.46 1280.06 10543.29
00:21:29.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11870.66 46.37 5391.18 1008.74 7910.95
00:21:29.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10856.42 42.41 5895.22 1203.86 9295.30
00:21:29.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11878.86 46.40 5387.43 1136.92 8950.44
00:21:29.720 ========================================================
00:21:29.720 Total : 44955.09 175.61 5694.54 1008.74 10543.29
00:21:29.720
00:21:29.720 07:38:45 -- target/perf_adq.sh@82 -- # nvmftestfini
00:21:29.720 07:38:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:21:29.720 07:38:45 -- nvmf/common.sh@116 -- # sync
00:21:29.720 07:38:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:21:29.720 07:38:45 -- nvmf/common.sh@119 -- # set +e
00:21:29.720 07:38:45 -- nvmf/common.sh@120 -- # for i in {1..20}
00:21:29.720 07:38:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:21:29.720 rmmod nvme_tcp
00:21:29.720 rmmod nvme_fabrics
00:21:29.720 rmmod nvme_keyring
00:21:29.720 07:38:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:21:29.720 07:38:45 -- nvmf/common.sh@123 -- # set -e
00:21:29.720 07:38:45 -- nvmf/common.sh@124 -- # return 0
00:21:29.720 07:38:45 -- nvmf/common.sh@477 -- # '[' -n 4148328 ']'
00:21:29.720 07:38:45 -- nvmf/common.sh@478 -- # killprocess 4148328
00:21:29.720 07:38:45 -- common/autotest_common.sh@926 -- # '[' -z 4148328 ']'
00:21:29.720 07:38:45 -- common/autotest_common.sh@930 --
# kill -0 4148328 00:21:29.720 07:38:45 -- common/autotest_common.sh@931 -- # uname 00:21:29.720 07:38:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:29.720 07:38:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4148328 00:21:29.720 07:38:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:29.720 07:38:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:29.720 07:38:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4148328' 00:21:29.720 killing process with pid 4148328 00:21:29.720 07:38:45 -- common/autotest_common.sh@945 -- # kill 4148328 00:21:29.720 07:38:45 -- common/autotest_common.sh@950 -- # wait 4148328 00:21:29.979 07:38:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:29.979 07:38:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:29.979 07:38:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:29.979 07:38:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.979 07:38:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:29.979 07:38:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.979 07:38:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.979 07:38:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.514 07:38:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:32.514 07:38:48 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:21:32.514 07:38:48 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:32.772 07:38:48 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:34.674 07:38:50 -- target/perf_adq.sh@54 -- # sleep 5 00:21:39.954 07:38:55 -- target/perf_adq.sh@87 -- # nvmftestinit 00:21:39.954 07:38:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:39.954 07:38:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.954 07:38:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:39.954 07:38:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:39.955 07:38:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:39.955 07:38:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.955 07:38:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.955 07:38:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.955 07:38:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:39.955 07:38:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:39.955 07:38:55 -- common/autotest_common.sh@10 -- # set +x 00:21:39.955 07:38:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:39.955 07:38:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:39.955 07:38:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:39.955 07:38:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:39.955 07:38:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:39.955 07:38:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:39.955 07:38:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:39.955 07:38:55 -- nvmf/common.sh@294 -- # net_devs=() 00:21:39.955 07:38:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:39.955 07:38:55 -- nvmf/common.sh@295 -- # e810=() 00:21:39.955 07:38:55 -- nvmf/common.sh@295 -- # local -ga e810 00:21:39.955 07:38:55 -- nvmf/common.sh@296 -- # x722=() 00:21:39.955 07:38:55 -- nvmf/common.sh@296 -- # local -ga x722 00:21:39.955 07:38:55 -- nvmf/common.sh@297 -- # mlx=() 00:21:39.955 07:38:55 
-- nvmf/common.sh@297 -- # local -ga mlx 00:21:39.955 07:38:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.955 07:38:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:39.955 07:38:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:39.955 07:38:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:39.955 07:38:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:39.955 07:38:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:39.955 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:39.955 07:38:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:39.955 07:38:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:39.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:39.955 07:38:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:39.955 07:38:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:39.955 07:38:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.955 07:38:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:39.955 07:38:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.955 07:38:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:39.955 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:39.955 07:38:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.955 07:38:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:39.955 07:38:55 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.955 07:38:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:39.955 07:38:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.955 07:38:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:39.955 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:39.955 07:38:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.955 07:38:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:39.955 07:38:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:39.955 07:38:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:39.955 07:38:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.955 07:38:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.955 07:38:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.955 07:38:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:39.955 07:38:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.955 07:38:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.955 07:38:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:39.955 07:38:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.955 07:38:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.955 07:38:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:39.955 07:38:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:39.955 07:38:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.955 07:38:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.955 07:38:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.955 07:38:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.955 07:38:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:39.955 07:38:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.955 07:38:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.955 07:38:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.955 07:38:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:39.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:39.955 00:21:39.955 --- 10.0.0.2 ping statistics --- 00:21:39.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.955 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:39.955 07:38:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:39.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:21:39.955 00:21:39.955 --- 10.0.0.1 ping statistics --- 00:21:39.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.955 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:39.955 07:38:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.955 07:38:55 -- nvmf/common.sh@410 -- # return 0 00:21:39.955 07:38:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:39.955 07:38:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.955 07:38:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:39.955 07:38:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.955 07:38:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:39.955 07:38:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:39.955 07:38:55 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:21:39.955 07:38:55 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:39.955 07:38:55 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:39.955 07:38:55 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:39.955 net.core.busy_poll = 1 00:21:39.955 07:38:55 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:39.955 net.core.busy_read = 1 00:21:39.955 07:38:55 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:39.955 07:38:55 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:39.955 07:38:56 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:39.955 07:38:56 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:39.955 07:38:56 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:39.955 07:38:56 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:39.955 07:38:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:39.955 07:38:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:39.955 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:39.955 07:38:56 -- nvmf/common.sh@469 -- # nvmfpid=4151167 00:21:39.955 07:38:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:39.955 07:38:56 -- nvmf/common.sh@470 -- # waitforlisten 4151167 00:21:39.955 07:38:56 -- common/autotest_common.sh@819 -- # '[' -z 4151167 ']' 00:21:39.955 07:38:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.955 07:38:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:39.955 07:38:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
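The adq_configure_driver block above is the heart of this test's ADQ (Application Device Queues) setup: it enables hardware TC offload on the E810 port, disables the channel-pkt-inspect-optimize private flag, turns on kernel busy polling, carves the NIC into two traffic classes with an offloaded mqprio qdisc, and pins NVMe/TCP traffic (TCP port 4420) to the second class with a hardware-only flower filter. A minimal standalone sketch of the same sequence, with IFACE and TADDR as placeholders for the cvl_0_0 netdev and 10.0.0.2 address used in this run:

  #!/usr/bin/env bash
  # ADQ driver-side setup, condensed from adq_configure_driver above.
  # IFACE/TADDR are placeholders; the test runs this inside the target netns.
  IFACE=eth0
  TADDR=10.0.0.2

  ethtool --offload "$IFACE" hw-tc-offload on
  ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

  # Busy polling lets application threads spin on socket queues instead of sleeping.
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1

  # Two traffic classes in channel mode: TC0 -> queues 0-1, TC1 -> queues 2-3.
  tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev "$IFACE" ingress

  # Steer NVMe/TCP to TC1 in hardware only (skip_sw), dedicating its queues.
  tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip "$TADDR"/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The test then runs scripts/perf/nvmf/set_xps_rxqs to align XPS with that queue split, and the target side pairs it with 'sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix' plus a transport created with '--sock-priority 1', as the RPC calls a few lines below show.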
00:21:39.955 07:38:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:39.955 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:39.955 [2024-07-14 07:38:56.124423] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:39.955 [2024-07-14 07:38:56.124504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.214 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.214 [2024-07-14 07:38:56.189377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.214 [2024-07-14 07:38:56.299157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:40.214 [2024-07-14 07:38:56.299321] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.214 [2024-07-14 07:38:56.299346] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.214 [2024-07-14 07:38:56.299364] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.214 [2024-07-14 07:38:56.299501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.214 [2024-07-14 07:38:56.299557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.214 [2024-07-14 07:38:56.299622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.214 [2024-07-14 07:38:56.299627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.214 07:38:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:40.214 07:38:56 -- common/autotest_common.sh@852 -- # return 0 00:21:40.214 07:38:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:40.214 07:38:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:40.214 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.214 07:38:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.214 07:38:56 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:21:40.214 07:38:56 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:40.214 07:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.214 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.214 07:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.214 07:38:56 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:40.214 07:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.214 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.473 07:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.473 07:38:56 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:40.473 07:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.473 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.473 [2024-07-14 07:38:56.475481] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.473 07:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.473 07:38:56 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:40.473 07:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.473 07:38:56 -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.473 Malloc1 00:21:40.473 07:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.473 07:38:56 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.473 07:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.473 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.473 07:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.473 07:38:56 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:40.473 07:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.473 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.473 07:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.473 07:38:56 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.473 07:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.473 07:38:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.473 [2024-07-14 07:38:56.528673] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.473 07:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.473 07:38:56 -- target/perf_adq.sh@94 -- # perfpid=4151195 00:21:40.473 07:38:56 -- target/perf_adq.sh@95 -- # sleep 2 00:21:40.473 07:38:56 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:40.473 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.372 07:38:58 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:21:42.372 07:38:58 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:42.372 07:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.372 07:38:58 -- target/perf_adq.sh@97 -- # wc -l 00:21:42.372 07:38:58 -- common/autotest_common.sh@10 -- # set +x 00:21:42.630 07:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.630 07:38:58 -- target/perf_adq.sh@97 -- # count=2 00:21:42.630 07:38:58 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:21:42.630 07:38:58 -- target/perf_adq.sh@103 -- # wait 4151195 00:21:50.742 Initializing NVMe Controllers 00:21:50.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:50.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:50.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:50.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:50.742 Initialization complete. Launching workers. 
00:21:50.742 ======================================================== 00:21:50.742 Latency(us) 00:21:50.742 Device Information : IOPS MiB/s Average min max 00:21:50.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10772.97 42.08 5941.25 3022.42 7817.21 00:21:50.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5650.18 22.07 11362.10 1180.77 60091.34 00:21:50.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5074.29 19.82 12644.31 1668.50 60584.73 00:21:50.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3977.91 15.54 16090.46 2416.52 60445.48 00:21:50.742 ======================================================== 00:21:50.742 Total : 25475.34 99.51 10063.46 1180.77 60584.73 00:21:50.742 00:21:50.742 07:39:06 -- target/perf_adq.sh@104 -- # nvmftestfini 00:21:50.742 07:39:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:50.742 07:39:06 -- nvmf/common.sh@116 -- # sync 00:21:50.742 07:39:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:50.742 07:39:06 -- nvmf/common.sh@119 -- # set +e 00:21:50.742 07:39:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:50.742 07:39:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:50.742 rmmod nvme_tcp 00:21:50.742 rmmod nvme_fabrics 00:21:50.742 rmmod nvme_keyring 00:21:50.742 07:39:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:50.742 07:39:06 -- nvmf/common.sh@123 -- # set -e 00:21:50.742 07:39:06 -- nvmf/common.sh@124 -- # return 0 00:21:50.742 07:39:06 -- nvmf/common.sh@477 -- # '[' -n 4151167 ']' 00:21:50.742 07:39:06 -- nvmf/common.sh@478 -- # killprocess 4151167 00:21:50.742 07:39:06 -- common/autotest_common.sh@926 -- # '[' -z 4151167 ']' 00:21:50.742 07:39:06 -- common/autotest_common.sh@930 -- # kill -0 4151167 00:21:50.742 07:39:06 -- common/autotest_common.sh@931 -- # uname 00:21:50.742 07:39:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:50.742 07:39:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4151167 00:21:50.742 07:39:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:50.742 07:39:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:50.742 07:39:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4151167' 00:21:50.742 killing process with pid 4151167 00:21:50.742 07:39:06 -- common/autotest_common.sh@945 -- # kill 4151167 00:21:50.742 07:39:06 -- common/autotest_common.sh@950 -- # wait 4151167 00:21:51.000 07:39:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:51.000 07:39:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:51.000 07:39:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:51.000 07:39:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.000 07:39:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:51.000 07:39:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.000 07:39:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.000 07:39:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.529 07:39:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:53.529 07:39:09 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:21:53.529 00:21:53.529 real 0m43.715s 00:21:53.529 user 2m22.690s 00:21:53.529 sys 0m15.717s 00:21:53.529 07:39:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.529 07:39:09 -- common/autotest_common.sh@10 -- # set +x 00:21:53.529 
************************************ 00:21:53.529 END TEST nvmf_perf_adq 00:21:53.529 ************************************ 00:21:53.529 07:39:09 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:53.529 07:39:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:53.529 07:39:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:53.529 07:39:09 -- common/autotest_common.sh@10 -- # set +x 00:21:53.529 ************************************ 00:21:53.529 START TEST nvmf_shutdown 00:21:53.529 ************************************ 00:21:53.529 07:39:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:53.529 * Looking for test storage... 00:21:53.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:53.529 07:39:09 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.529 07:39:09 -- nvmf/common.sh@7 -- # uname -s 00:21:53.529 07:39:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.529 07:39:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.529 07:39:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.529 07:39:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.529 07:39:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.529 07:39:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.529 07:39:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.529 07:39:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.529 07:39:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.529 07:39:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.529 07:39:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.529 07:39:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.529 07:39:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.529 07:39:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.529 07:39:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.529 07:39:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.529 07:39:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.529 07:39:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.529 07:39:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.529 07:39:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.529 07:39:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.529 07:39:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.529 07:39:09 -- paths/export.sh@5 -- # export PATH 00:21:53.529 07:39:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.529 07:39:09 -- nvmf/common.sh@46 -- # : 0 00:21:53.529 07:39:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:53.529 07:39:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:53.529 07:39:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:53.529 07:39:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.529 07:39:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.529 07:39:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:53.529 07:39:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:53.529 07:39:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:53.529 07:39:09 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:53.529 07:39:09 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:53.529 07:39:09 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:53.529 07:39:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:53.529 07:39:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:53.529 07:39:09 -- common/autotest_common.sh@10 -- # set +x 00:21:53.529 ************************************ 00:21:53.529 START TEST nvmf_shutdown_tc1 00:21:53.529 ************************************ 00:21:53.529 07:39:09 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:21:53.529 07:39:09 -- target/shutdown.sh@74 -- # starttarget 00:21:53.529 07:39:09 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:53.529 07:39:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:53.529 07:39:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.529 07:39:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:53.529 07:39:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:53.529 07:39:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:53.529 
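Among the variables set while sourcing test/nvmf/common.sh above are the listener ports (4420-4422) and the initiator identity: 'nvme gen-hostnqn' mints a UUID-based NQN once per run, the UUID suffix doubles as the host ID, and both are packed into an argument array reused by every later 'nvme connect'. A simplified sketch of that derivation (the exact expansion used by common.sh is an assumption; only the inputs and outputs are visible in the log):

  # Host identity as surfaced in test/nvmf/common.sh (simplified sketch).
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # the UUID suffix; derivation is an assumption
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # Later connects can then reuse it:
  # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n <subnqn>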
07:39:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.529 07:39:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.529 07:39:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.529 07:39:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:53.529 07:39:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:53.529 07:39:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:53.529 07:39:09 -- common/autotest_common.sh@10 -- # set +x 00:21:54.903 07:39:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:54.903 07:39:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:54.903 07:39:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:54.903 07:39:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:54.903 07:39:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:54.903 07:39:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:54.903 07:39:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:54.903 07:39:10 -- nvmf/common.sh@294 -- # net_devs=() 00:21:54.903 07:39:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:54.903 07:39:10 -- nvmf/common.sh@295 -- # e810=() 00:21:54.903 07:39:10 -- nvmf/common.sh@295 -- # local -ga e810 00:21:54.903 07:39:10 -- nvmf/common.sh@296 -- # x722=() 00:21:54.903 07:39:10 -- nvmf/common.sh@296 -- # local -ga x722 00:21:54.903 07:39:10 -- nvmf/common.sh@297 -- # mlx=() 00:21:54.903 07:39:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:54.903 07:39:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.903 07:39:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:54.903 07:39:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:54.903 07:39:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:54.903 07:39:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:54.903 07:39:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:54.903 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:54.903 07:39:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:21:54.903 07:39:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:54.903 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:54.903 07:39:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:54.903 07:39:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:54.904 07:39:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.904 07:39:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.904 07:39:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:54.904 07:39:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:54.904 07:39:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:54.904 07:39:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:54.904 07:39:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:54.904 07:39:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.904 07:39:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:54.904 07:39:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.904 07:39:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:54.904 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:54.904 07:39:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.904 07:39:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:54.904 07:39:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.904 07:39:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:54.904 07:39:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.904 07:39:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:54.904 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:54.904 07:39:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.904 07:39:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:54.904 07:39:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:54.904 07:39:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:54.904 07:39:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:54.904 07:39:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:54.904 07:39:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.904 07:39:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.904 07:39:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.904 07:39:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:54.904 07:39:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.904 07:39:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.904 07:39:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:54.904 07:39:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.904 07:39:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.904 07:39:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:54.904 07:39:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:54.904 07:39:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.904 07:39:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.904 07:39:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.904 07:39:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.904 07:39:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:55.162 07:39:11 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.162 07:39:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.162 07:39:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.162 07:39:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:55.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:21:55.162 00:21:55.162 --- 10.0.0.2 ping statistics --- 00:21:55.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.162 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:21:55.162 07:39:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:21:55.162 00:21:55.162 --- 10.0.0.1 ping statistics --- 00:21:55.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.162 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:21:55.162 07:39:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.162 07:39:11 -- nvmf/common.sh@410 -- # return 0 00:21:55.162 07:39:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:55.162 07:39:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.162 07:39:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:55.162 07:39:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:55.162 07:39:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.162 07:39:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:55.162 07:39:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:55.162 07:39:11 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:55.162 07:39:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:55.162 07:39:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:55.162 07:39:11 -- common/autotest_common.sh@10 -- # set +x 00:21:55.162 07:39:11 -- nvmf/common.sh@469 -- # nvmfpid=4155018 00:21:55.162 07:39:11 -- nvmf/common.sh@470 -- # waitforlisten 4155018 00:21:55.162 07:39:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:55.162 07:39:11 -- common/autotest_common.sh@819 -- # '[' -z 4155018 ']' 00:21:55.162 07:39:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.162 07:39:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:55.162 07:39:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.162 07:39:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:55.162 07:39:11 -- common/autotest_common.sh@10 -- # set +x 00:21:55.162 [2024-07-14 07:39:11.202478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
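The nvmf_tcp_init sequence above is the standard phy-mode topology for these tests: both E810 ports get their addresses flushed, cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the 10.0.0.2 target side, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, an iptables rule admits TCP/4420, and a ping in each direction proves the path before any NVMe traffic is attempted. Condensed into one runnable sketch:

  # Two-port target/initiator topology from nvmf_tcp_init (condensed).
  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0   # target port, moved into the namespace
  INI_IF=cvl_0_1   # initiator port, stays in the root namespace

  ip -4 addr flush dev "$TGT_IF"
  ip -4 addr flush dev "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> initiator

Every target-side command afterwards is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD in the lines above.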
00:21:55.162 [2024-07-14 07:39:11.202559] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.162 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.162 [2024-07-14 07:39:11.272383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.419 [2024-07-14 07:39:11.387746] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:55.419 [2024-07-14 07:39:11.387943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.419 [2024-07-14 07:39:11.387974] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.420 [2024-07-14 07:39:11.387998] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.420 [2024-07-14 07:39:11.388117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.420 [2024-07-14 07:39:11.388282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.420 [2024-07-14 07:39:11.388347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.420 [2024-07-14 07:39:11.388344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:55.984 07:39:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:55.984 07:39:12 -- common/autotest_common.sh@852 -- # return 0 00:21:55.984 07:39:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:55.984 07:39:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:55.984 07:39:12 -- common/autotest_common.sh@10 -- # set +x 00:21:55.984 07:39:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.984 07:39:12 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.984 07:39:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.984 07:39:12 -- common/autotest_common.sh@10 -- # set +x 00:21:55.984 [2024-07-14 07:39:12.147265] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.241 07:39:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.241 07:39:12 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:56.241 07:39:12 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:56.241 07:39:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:56.241 07:39:12 -- common/autotest_common.sh@10 -- # set +x 00:21:56.241 07:39:12 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- 
target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.241 07:39:12 -- target/shutdown.sh@28 -- # cat 00:21:56.241 07:39:12 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:56.241 07:39:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.241 07:39:12 -- common/autotest_common.sh@10 -- # set +x 00:21:56.241 Malloc1 00:21:56.241 [2024-07-14 07:39:12.222458] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.241 Malloc2 00:21:56.241 Malloc3 00:21:56.241 Malloc4 00:21:56.241 Malloc5 00:21:56.499 Malloc6 00:21:56.499 Malloc7 00:21:56.499 Malloc8 00:21:56.499 Malloc9 00:21:56.499 Malloc10 00:21:56.499 07:39:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.499 07:39:12 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:56.499 07:39:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:56.499 07:39:12 -- common/autotest_common.sh@10 -- # set +x 00:21:56.757 07:39:12 -- target/shutdown.sh@78 -- # perfpid=4155260 00:21:56.757 07:39:12 -- target/shutdown.sh@79 -- # waitforlisten 4155260 /var/tmp/bdevperf.sock 00:21:56.757 07:39:12 -- common/autotest_common.sh@819 -- # '[' -z 4155260 ']' 00:21:56.757 07:39:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.757 07:39:12 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:56.757 07:39:12 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:56.757 07:39:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.757 07:39:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
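The for/cat lines above are shutdown.sh's create_subsystems step: each loop iteration appends one subsystem definition to rpcs.txt, and the single bare rpc_cmd call at shutdown.sh@35 then replays the whole batch from stdin, which is where the Malloc1..Malloc10 bdevs and their listeners come from. A sketch reconstructed from the per-subsystem RPCs visible earlier in this log (the serial-number format and exact file handling are assumptions):

  # Sketch of the rpcs.txt batch for subsystems 1..10 (reconstructed).
  # 64 MiB / 512 B come from MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above.
  rm -f rpcs.txt
  for i in {1..10}; do
  cat <<EOF >> rpcs.txt
  bdev_malloc_create 64 512 -b Malloc$i
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  EOF
  done
  rpc_cmd < rpcs.txt   # replay the batch against the running target in one call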
00:21:56.757 07:39:12 -- nvmf/common.sh@520 -- # config=() 00:21:56.757 07:39:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.757 07:39:12 -- nvmf/common.sh@520 -- # local subsystem config 00:21:56.757 07:39:12 -- common/autotest_common.sh@10 -- # set +x 00:21:56.757 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": 
"$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 
07:39:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:56.758 { 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme$subsystem", 00:21:56.758 "trtype": "$TEST_TRANSPORT", 00:21:56.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "$NVMF_PORT", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.758 "hdgst": ${hdgst:-false}, 00:21:56.758 "ddgst": ${ddgst:-false} 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 } 00:21:56.758 EOF 00:21:56.758 )") 00:21:56.758 07:39:12 -- nvmf/common.sh@542 -- # cat 00:21:56.758 07:39:12 -- nvmf/common.sh@544 -- # jq . 00:21:56.758 07:39:12 -- nvmf/common.sh@545 -- # IFS=, 00:21:56.758 07:39:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme1", 00:21:56.758 "trtype": "tcp", 00:21:56.758 "traddr": "10.0.0.2", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "4420", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.758 "hdgst": false, 00:21:56.758 "ddgst": false 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 },{ 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme2", 00:21:56.758 "trtype": "tcp", 00:21:56.758 "traddr": "10.0.0.2", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "4420", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:56.758 "hdgst": false, 00:21:56.758 "ddgst": false 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 },{ 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme3", 00:21:56.758 "trtype": "tcp", 00:21:56.758 "traddr": "10.0.0.2", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "4420", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:56.758 "hdgst": false, 00:21:56.758 "ddgst": false 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 },{ 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme4", 00:21:56.758 "trtype": "tcp", 00:21:56.758 "traddr": "10.0.0.2", 00:21:56.758 "adrfam": "ipv4", 00:21:56.758 "trsvcid": "4420", 00:21:56.758 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:56.758 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:56.758 "hdgst": false, 00:21:56.758 "ddgst": false 00:21:56.758 }, 00:21:56.758 "method": "bdev_nvme_attach_controller" 00:21:56.758 },{ 00:21:56.758 "params": { 00:21:56.758 "name": "Nvme5", 00:21:56.758 "trtype": "tcp", 00:21:56.758 "traddr": "10.0.0.2", 00:21:56.758 "adrfam": "ipv4", 00:21:56.759 "trsvcid": "4420", 00:21:56.759 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:56.759 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:56.759 "hdgst": false, 00:21:56.759 "ddgst": false 00:21:56.759 }, 00:21:56.759 "method": "bdev_nvme_attach_controller" 00:21:56.759 },{ 00:21:56.759 "params": { 00:21:56.759 "name": "Nvme6", 00:21:56.759 "trtype": "tcp", 00:21:56.759 "traddr": "10.0.0.2", 00:21:56.759 "adrfam": "ipv4", 00:21:56.759 "trsvcid": "4420", 00:21:56.759 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:56.759 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:56.759 "hdgst": false, 00:21:56.759 "ddgst": false 00:21:56.759 }, 00:21:56.759 "method": "bdev_nvme_attach_controller" 00:21:56.759 },{ 00:21:56.759 "params": { 00:21:56.759 
"name": "Nvme7", 00:21:56.759 "trtype": "tcp", 00:21:56.759 "traddr": "10.0.0.2", 00:21:56.759 "adrfam": "ipv4", 00:21:56.759 "trsvcid": "4420", 00:21:56.759 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:56.759 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:56.759 "hdgst": false, 00:21:56.759 "ddgst": false 00:21:56.759 }, 00:21:56.759 "method": "bdev_nvme_attach_controller" 00:21:56.759 },{ 00:21:56.759 "params": { 00:21:56.759 "name": "Nvme8", 00:21:56.759 "trtype": "tcp", 00:21:56.759 "traddr": "10.0.0.2", 00:21:56.759 "adrfam": "ipv4", 00:21:56.759 "trsvcid": "4420", 00:21:56.759 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:56.759 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:56.759 "hdgst": false, 00:21:56.759 "ddgst": false 00:21:56.759 }, 00:21:56.759 "method": "bdev_nvme_attach_controller" 00:21:56.759 },{ 00:21:56.759 "params": { 00:21:56.759 "name": "Nvme9", 00:21:56.759 "trtype": "tcp", 00:21:56.759 "traddr": "10.0.0.2", 00:21:56.759 "adrfam": "ipv4", 00:21:56.759 "trsvcid": "4420", 00:21:56.759 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:56.759 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:56.759 "hdgst": false, 00:21:56.759 "ddgst": false 00:21:56.759 }, 00:21:56.759 "method": "bdev_nvme_attach_controller" 00:21:56.759 },{ 00:21:56.759 "params": { 00:21:56.759 "name": "Nvme10", 00:21:56.759 "trtype": "tcp", 00:21:56.759 "traddr": "10.0.0.2", 00:21:56.759 "adrfam": "ipv4", 00:21:56.759 "trsvcid": "4420", 00:21:56.759 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:56.759 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:56.759 "hdgst": false, 00:21:56.759 "ddgst": false 00:21:56.759 }, 00:21:56.759 "method": "bdev_nvme_attach_controller" 00:21:56.759 }' 00:21:56.759 [2024-07-14 07:39:12.717367] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:56.759 [2024-07-14 07:39:12.717458] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:56.759 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.759 [2024-07-14 07:39:12.783113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.759 [2024-07-14 07:39:12.891755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.653 07:39:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:58.653 07:39:14 -- common/autotest_common.sh@852 -- # return 0 00:21:58.653 07:39:14 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:58.653 07:39:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.653 07:39:14 -- common/autotest_common.sh@10 -- # set +x 00:21:58.653 07:39:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.653 07:39:14 -- target/shutdown.sh@83 -- # kill -9 4155260 00:21:58.653 07:39:14 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:58.653 07:39:14 -- target/shutdown.sh@87 -- # sleep 1 00:21:59.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4155260 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:59.587 07:39:15 -- target/shutdown.sh@88 -- # kill -0 4155018 00:21:59.587 07:39:15 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:59.587 07:39:15 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:59.587 07:39:15 -- nvmf/common.sh@520 -- # config=() 00:21:59.587 07:39:15 -- nvmf/common.sh@520 -- # local subsystem config 00:21:59.587 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.587 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.587 { 00:21:59.587 "params": { 00:21:59.587 "name": "Nvme$subsystem", 00:21:59.587 "trtype": "$TEST_TRANSPORT", 00:21:59.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.587 "adrfam": "ipv4", 00:21:59.587 "trsvcid": "$NVMF_PORT", 00:21:59.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.587 "hdgst": ${hdgst:-false}, 00:21:59.587 "ddgst": ${ddgst:-false} 00:21:59.587 }, 00:21:59.587 "method": "bdev_nvme_attach_controller" 00:21:59.587 } 00:21:59.587 EOF 00:21:59.587 )") 00:21:59.587 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.587 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.587 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.587 { 00:21:59.587 "params": { 00:21:59.587 "name": "Nvme$subsystem", 00:21:59.587 "trtype": "$TEST_TRANSPORT", 00:21:59.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.587 "adrfam": "ipv4", 00:21:59.587 "trsvcid": "$NVMF_PORT", 00:21:59.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.587 "hdgst": ${hdgst:-false}, 00:21:59.587 "ddgst": ${ddgst:-false} 00:21:59.587 }, 00:21:59.587 "method": "bdev_nvme_attach_controller" 00:21:59.587 } 00:21:59.587 EOF 00:21:59.587 )") 00:21:59.587 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.587 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.587 07:39:15 -- 
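This is the actual shutdown_tc1 assertion: a throwaway bdev_svc app (pid 4155260 here) attaches NVMe-oF controllers for all ten subsystems over TCP, is killed outright with SIGKILL, and after the one-second grace period the next lines check with 'kill -0' that the nvmf target itself (pid 4155018) survived the abrupt disconnects. The shape of the check, reduced to its essentials (helper names are the test framework's; glue such as 'perfpid=$!' is inferred):

  # Core of nvmf_shutdown_tc1: a crashing initiator must not kill the target.
  "$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
  perfpid=$!
  waitforlisten "$perfpid" /var/tmp/bdevperf.sock        # SPDK test helper
  rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init  # controllers attached

  kill -9 "$perfpid"            # simulate an initiator-side crash
  rm -f /var/run/spdk_bdev1
  sleep 1
  kill -0 "$nvmfpid"            # the target process must still be alive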
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.587 { 00:21:59.587 "params": { 00:21:59.587 "name": "Nvme$subsystem", 00:21:59.587 "trtype": "$TEST_TRANSPORT", 00:21:59.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.587 "adrfam": "ipv4", 00:21:59.587 "trsvcid": "$NVMF_PORT", 00:21:59.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.587 "hdgst": ${hdgst:-false}, 00:21:59.587 "ddgst": ${ddgst:-false} 00:21:59.587 }, 00:21:59.587 "method": "bdev_nvme_attach_controller" 00:21:59.587 } 00:21:59.587 EOF 00:21:59.587 )") 00:21:59.587 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.587 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.587 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.587 { 00:21:59.587 "params": { 00:21:59.587 "name": "Nvme$subsystem", 00:21:59.587 "trtype": "$TEST_TRANSPORT", 00:21:59.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.587 "adrfam": "ipv4", 00:21:59.587 "trsvcid": "$NVMF_PORT", 00:21:59.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.587 "hdgst": ${hdgst:-false}, 00:21:59.587 "ddgst": ${ddgst:-false} 00:21:59.587 }, 00:21:59.587 "method": "bdev_nvme_attach_controller" 00:21:59.587 } 00:21:59.587 EOF 00:21:59.587 )") 00:21:59.587 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.587 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.587 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.587 { 00:21:59.587 "params": { 00:21:59.587 "name": "Nvme$subsystem", 00:21:59.587 "trtype": "$TEST_TRANSPORT", 00:21:59.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.587 "adrfam": "ipv4", 00:21:59.587 "trsvcid": "$NVMF_PORT", 00:21:59.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.587 "hdgst": ${hdgst:-false}, 00:21:59.587 "ddgst": ${ddgst:-false} 00:21:59.587 }, 00:21:59.587 "method": "bdev_nvme_attach_controller" 00:21:59.587 } 00:21:59.587 EOF 00:21:59.588 )") 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.588 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.588 { 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme$subsystem", 00:21:59.588 "trtype": "$TEST_TRANSPORT", 00:21:59.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "$NVMF_PORT", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.588 "hdgst": ${hdgst:-false}, 00:21:59.588 "ddgst": ${ddgst:-false} 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 } 00:21:59.588 EOF 00:21:59.588 )") 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.588 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.588 { 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme$subsystem", 00:21:59.588 "trtype": "$TEST_TRANSPORT", 00:21:59.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "$NVMF_PORT", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.588 "hdgst": ${hdgst:-false}, 00:21:59.588 "ddgst": ${ddgst:-false} 00:21:59.588 }, 00:21:59.588 
"method": "bdev_nvme_attach_controller" 00:21:59.588 } 00:21:59.588 EOF 00:21:59.588 )") 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.588 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.588 { 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme$subsystem", 00:21:59.588 "trtype": "$TEST_TRANSPORT", 00:21:59.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "$NVMF_PORT", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.588 "hdgst": ${hdgst:-false}, 00:21:59.588 "ddgst": ${ddgst:-false} 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 } 00:21:59.588 EOF 00:21:59.588 )") 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.588 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.588 { 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme$subsystem", 00:21:59.588 "trtype": "$TEST_TRANSPORT", 00:21:59.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "$NVMF_PORT", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.588 "hdgst": ${hdgst:-false}, 00:21:59.588 "ddgst": ${ddgst:-false} 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 } 00:21:59.588 EOF 00:21:59.588 )") 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.588 07:39:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.588 { 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme$subsystem", 00:21:59.588 "trtype": "$TEST_TRANSPORT", 00:21:59.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "$NVMF_PORT", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.588 "hdgst": ${hdgst:-false}, 00:21:59.588 "ddgst": ${ddgst:-false} 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 } 00:21:59.588 EOF 00:21:59.588 )") 00:21:59.588 07:39:15 -- nvmf/common.sh@542 -- # cat 00:21:59.588 07:39:15 -- nvmf/common.sh@544 -- # jq . 
00:21:59.588 07:39:15 -- nvmf/common.sh@545 -- # IFS=, 00:21:59.588 07:39:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme1", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme2", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme3", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme4", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme5", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme6", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme7", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme8", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": 
"bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme9", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 },{ 00:21:59.588 "params": { 00:21:59.588 "name": "Nvme10", 00:21:59.588 "trtype": "tcp", 00:21:59.588 "traddr": "10.0.0.2", 00:21:59.588 "adrfam": "ipv4", 00:21:59.588 "trsvcid": "4420", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:59.588 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:59.588 "hdgst": false, 00:21:59.588 "ddgst": false 00:21:59.588 }, 00:21:59.588 "method": "bdev_nvme_attach_controller" 00:21:59.588 }' 00:21:59.588 [2024-07-14 07:39:15.461315] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:59.588 [2024-07-14 07:39:15.461395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4155643 ] 00:21:59.588 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.588 [2024-07-14 07:39:15.526299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.588 [2024-07-14 07:39:15.634200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.487 Running I/O for 1 seconds... 00:22:02.422 00:22:02.422 Latency(us) 00:22:02.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.422 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme1n1 : 1.10 362.23 22.64 0.00 0.00 172375.90 37865.24 205054.86 00:22:02.422 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme2n1 : 1.06 420.58 26.29 0.00 0.00 147772.58 4951.61 123498.95 00:22:02.422 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme3n1 : 1.08 402.25 25.14 0.00 0.00 153264.53 24660.95 125829.12 00:22:02.422 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme4n1 : 1.09 403.49 25.22 0.00 0.00 152435.96 11019.76 121945.51 00:22:02.422 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme5n1 : 1.15 346.38 21.65 0.00 0.00 171453.18 15922.82 168548.88 00:22:02.422 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme6n1 : 1.09 399.15 24.95 0.00 0.00 152129.98 16796.63 118838.61 00:22:02.422 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme7n1 : 1.09 398.55 24.91 0.00 0.00 151538.94 14757.74 128159.29 00:22:02.422 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme8n1 : 1.10 394.31 24.64 0.00 0.00 152062.65 15340.28 
123498.95 00:22:02.422 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme9n1 : 1.15 345.42 21.59 0.00 0.00 167615.36 6262.33 194180.74 00:22:02.422 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.422 Verification LBA range: start 0x0 length 0x400 00:22:02.422 Nvme10n1 : 1.15 420.01 26.25 0.00 0.00 136519.61 13010.11 117285.17 00:22:02.422 =================================================================================================================== 00:22:02.422 Total : 3892.37 243.27 0.00 0.00 155103.80 4951.61 205054.86 00:22:02.681 07:39:18 -- target/shutdown.sh@93 -- # stoptarget 00:22:02.681 07:39:18 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:02.681 07:39:18 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:02.681 07:39:18 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:02.681 07:39:18 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:02.681 07:39:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:02.681 07:39:18 -- nvmf/common.sh@116 -- # sync 00:22:02.681 07:39:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:02.681 07:39:18 -- nvmf/common.sh@119 -- # set +e 00:22:02.681 07:39:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:02.681 07:39:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:02.681 rmmod nvme_tcp 00:22:02.681 rmmod nvme_fabrics 00:22:02.681 rmmod nvme_keyring 00:22:02.681 07:39:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:02.681 07:39:18 -- nvmf/common.sh@123 -- # set -e 00:22:02.681 07:39:18 -- nvmf/common.sh@124 -- # return 0 00:22:02.681 07:39:18 -- nvmf/common.sh@477 -- # '[' -n 4155018 ']' 00:22:02.681 07:39:18 -- nvmf/common.sh@478 -- # killprocess 4155018 00:22:02.681 07:39:18 -- common/autotest_common.sh@926 -- # '[' -z 4155018 ']' 00:22:02.681 07:39:18 -- common/autotest_common.sh@930 -- # kill -0 4155018 00:22:02.681 07:39:18 -- common/autotest_common.sh@931 -- # uname 00:22:02.681 07:39:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:02.681 07:39:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4155018 00:22:02.681 07:39:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:02.681 07:39:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:02.681 07:39:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4155018' 00:22:02.681 killing process with pid 4155018 00:22:02.681 07:39:18 -- common/autotest_common.sh@945 -- # kill 4155018 00:22:02.681 07:39:18 -- common/autotest_common.sh@950 -- # wait 4155018 00:22:03.252 07:39:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:03.252 07:39:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:03.252 07:39:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:03.252 07:39:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.252 07:39:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:03.252 07:39:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.252 07:39:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.252 07:39:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.157 07:39:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:05.157 00:22:05.157 real 0m12.151s 
00:22:05.157 user 0m36.048s 00:22:05.157 sys 0m3.110s 00:22:05.157 07:39:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.157 07:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:05.157 ************************************ 00:22:05.157 END TEST nvmf_shutdown_tc1 00:22:05.157 ************************************ 00:22:05.415 07:39:21 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:05.415 07:39:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:05.415 07:39:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:05.415 07:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:05.415 ************************************ 00:22:05.415 START TEST nvmf_shutdown_tc2 00:22:05.415 ************************************ 00:22:05.415 07:39:21 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:22:05.415 07:39:21 -- target/shutdown.sh@98 -- # starttarget 00:22:05.415 07:39:21 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:05.415 07:39:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:05.415 07:39:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.415 07:39:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:05.415 07:39:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:05.415 07:39:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:05.415 07:39:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.415 07:39:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.415 07:39:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.415 07:39:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:05.415 07:39:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:05.415 07:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:05.415 07:39:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:05.415 07:39:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:05.415 07:39:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:05.415 07:39:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:05.415 07:39:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:05.415 07:39:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:05.415 07:39:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:05.415 07:39:21 -- nvmf/common.sh@294 -- # net_devs=() 00:22:05.415 07:39:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:05.415 07:39:21 -- nvmf/common.sh@295 -- # e810=() 00:22:05.415 07:39:21 -- nvmf/common.sh@295 -- # local -ga e810 00:22:05.415 07:39:21 -- nvmf/common.sh@296 -- # x722=() 00:22:05.415 07:39:21 -- nvmf/common.sh@296 -- # local -ga x722 00:22:05.415 07:39:21 -- nvmf/common.sh@297 -- # mlx=() 00:22:05.415 07:39:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:05.415 07:39:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@313 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.415 07:39:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:05.415 07:39:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:05.415 07:39:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:05.415 07:39:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:05.415 07:39:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:05.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:05.415 07:39:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:05.415 07:39:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:05.415 07:39:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:05.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:05.416 07:39:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:05.416 07:39:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:05.416 07:39:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.416 07:39:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:05.416 07:39:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.416 07:39:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:05.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:05.416 07:39:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.416 07:39:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:05.416 07:39:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.416 07:39:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:05.416 07:39:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.416 07:39:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:05.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:05.416 07:39:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.416 07:39:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:05.416 07:39:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:05.416 07:39:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 
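Note on the device discovery above: the pci_devs/e810 bookkeeping resolves each detected PCI function (here 0000:0a:00.0 and 0000:0a:00.1, Intel device ID 0x159b) to its kernel netdev through sysfs, which is where the "Found net devices under ..." lines come from. A minimal sketch of that lookup, using the PCI addresses printed in the trace:

    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # each bound PCI function exposes its netdev(s) under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The nvmf_tcp_init trace that follows consumes this net_devs list to pick the target and initiator interfaces.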
00:22:05.416 07:39:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.416 07:39:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.416 07:39:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.416 07:39:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:05.416 07:39:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.416 07:39:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.416 07:39:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:05.416 07:39:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.416 07:39:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.416 07:39:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:05.416 07:39:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:05.416 07:39:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.416 07:39:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.416 07:39:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.416 07:39:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.416 07:39:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:05.416 07:39:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.416 07:39:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.416 07:39:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.416 07:39:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:05.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:22:05.416 00:22:05.416 --- 10.0.0.2 ping statistics --- 00:22:05.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.416 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:22:05.416 07:39:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:22:05.416 00:22:05.416 --- 10.0.0.1 ping statistics --- 00:22:05.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.416 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:05.416 07:39:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.416 07:39:21 -- nvmf/common.sh@410 -- # return 0 00:22:05.416 07:39:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:05.416 07:39:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.416 07:39:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:05.416 07:39:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.416 07:39:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:05.416 07:39:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:05.416 07:39:21 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:05.416 07:39:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:05.416 07:39:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:05.416 07:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:05.416 07:39:21 -- nvmf/common.sh@469 -- # nvmfpid=4156428 00:22:05.416 07:39:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:05.416 07:39:21 -- nvmf/common.sh@470 -- # waitforlisten 4156428 00:22:05.416 07:39:21 -- common/autotest_common.sh@819 -- # '[' -z 4156428 ']' 00:22:05.416 07:39:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.416 07:39:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:05.416 07:39:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.416 07:39:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:05.416 07:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:05.674 [2024-07-14 07:39:21.585484] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:05.674 [2024-07-14 07:39:21.585574] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.674 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.674 [2024-07-14 07:39:21.651103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.674 [2024-07-14 07:39:21.759973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:05.674 [2024-07-14 07:39:21.760135] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.674 [2024-07-14 07:39:21.760161] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.674 [2024-07-14 07:39:21.760181] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
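Note on the nvmf_tcp_init trace above: it builds the test topology on a single dual-port NIC. Port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), port cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened, and one ping in each direction proves the path before the target starts. Condensed from the exact commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The nvmf_tgt started afterwards is wrapped in ip netns exec cvl_0_0_ns_spdk, so it listens on the namespaced port while bdevperf connects from the root namespace.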
00:22:05.674 [2024-07-14 07:39:21.760276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.674 [2024-07-14 07:39:21.760443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.674 [2024-07-14 07:39:21.760502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:05.674 [2024-07-14 07:39:21.760505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.605 07:39:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:06.605 07:39:22 -- common/autotest_common.sh@852 -- # return 0 00:22:06.605 07:39:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:06.605 07:39:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:06.605 07:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:06.605 07:39:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.605 07:39:22 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.605 07:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.605 07:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:06.605 [2024-07-14 07:39:22.541392] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.605 07:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:06.605 07:39:22 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:06.605 07:39:22 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:06.605 07:39:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:06.605 07:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:06.605 07:39:22 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.605 07:39:22 -- target/shutdown.sh@28 -- # cat 00:22:06.605 07:39:22 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:06.605 07:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.605 07:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:06.605 Malloc1 00:22:06.605 [2024-07-14 07:39:22.630527] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.605 Malloc2 
00:22:06.605 Malloc3 00:22:06.605 Malloc4 00:22:06.863 Malloc5 00:22:06.863 Malloc6 00:22:06.863 Malloc7 00:22:06.863 Malloc8 00:22:06.863 Malloc9 00:22:07.121 Malloc10 00:22:07.121 07:39:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.121 07:39:23 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:07.121 07:39:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:07.121 07:39:23 -- common/autotest_common.sh@10 -- # set +x 00:22:07.121 07:39:23 -- target/shutdown.sh@102 -- # perfpid=4156739 00:22:07.121 07:39:23 -- target/shutdown.sh@103 -- # waitforlisten 4156739 /var/tmp/bdevperf.sock 00:22:07.121 07:39:23 -- common/autotest_common.sh@819 -- # '[' -z 4156739 ']' 00:22:07.121 07:39:23 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:07.121 07:39:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.121 07:39:23 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:07.121 07:39:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:07.121 07:39:23 -- nvmf/common.sh@520 -- # config=() 00:22:07.121 07:39:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.121 07:39:23 -- nvmf/common.sh@520 -- # local subsystem config 00:22:07.121 07:39:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- common/autotest_common.sh@10 -- # set +x 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.121 "hdgst": ${hdgst:-false}, 00:22:07.121 "ddgst": ${ddgst:-false} 00:22:07.121 }, 00:22:07.121 "method": "bdev_nvme_attach_controller" 00:22:07.121 } 00:22:07.121 EOF 00:22:07.121 )") 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.121 07:39:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:07.121 07:39:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:07.121 { 00:22:07.121 "params": { 00:22:07.121 "name": "Nvme$subsystem", 00:22:07.121 "trtype": "$TEST_TRANSPORT", 00:22:07.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.121 "adrfam": "ipv4", 00:22:07.121 "trsvcid": "$NVMF_PORT", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.122 "hdgst": ${hdgst:-false}, 00:22:07.122 "ddgst": ${ddgst:-false} 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 } 00:22:07.122 EOF 00:22:07.122 )") 00:22:07.122 07:39:23 -- nvmf/common.sh@542 -- # cat 00:22:07.122 07:39:23 -- nvmf/common.sh@544 -- # jq . 
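Note on the tc2 setup above: as in tc1, the config assembled here is handed to bdevperf through bash process substitution, so no config file ever lands on disk; the --json /dev/fd/63 in the traced command line is simply the read end of that pipe. A sketch of the invocation pattern, with the paths and flags as they appear in this workspace's trace:

    # gen_nvmf_target_json writes to a pipe; bdevperf reads it as /dev/fd/63
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10

The -r socket is what the test script later uses for rpc_cmd calls (framework_wait_init, bdev_get_iostat) against the running bdevperf instance.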
00:22:07.122 07:39:23 -- nvmf/common.sh@545 -- # IFS=, 00:22:07.122 07:39:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme1", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme2", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme3", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme4", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme5", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme6", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme7", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme8", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": 
"bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme9", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 },{ 00:22:07.122 "params": { 00:22:07.122 "name": "Nvme10", 00:22:07.122 "trtype": "tcp", 00:22:07.122 "traddr": "10.0.0.2", 00:22:07.122 "adrfam": "ipv4", 00:22:07.122 "trsvcid": "4420", 00:22:07.122 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:07.122 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:07.122 "hdgst": false, 00:22:07.122 "ddgst": false 00:22:07.122 }, 00:22:07.122 "method": "bdev_nvme_attach_controller" 00:22:07.122 }' 00:22:07.122 [2024-07-14 07:39:23.137706] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:07.122 [2024-07-14 07:39:23.137780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156739 ] 00:22:07.122 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.122 [2024-07-14 07:39:23.203833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.379 [2024-07-14 07:39:23.312736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.277 Running I/O for 10 seconds... 00:22:09.534 07:39:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:09.534 07:39:25 -- common/autotest_common.sh@852 -- # return 0 00:22:09.534 07:39:25 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:09.534 07:39:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.534 07:39:25 -- common/autotest_common.sh@10 -- # set +x 00:22:09.534 07:39:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.534 07:39:25 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:09.534 07:39:25 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:09.534 07:39:25 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:09.534 07:39:25 -- target/shutdown.sh@57 -- # local ret=1 00:22:09.534 07:39:25 -- target/shutdown.sh@58 -- # local i 00:22:09.534 07:39:25 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:09.534 07:39:25 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:09.534 07:39:25 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:09.534 07:39:25 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:09.534 07:39:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.534 07:39:25 -- common/autotest_common.sh@10 -- # set +x 00:22:09.534 07:39:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.534 07:39:25 -- target/shutdown.sh@60 -- # read_io_count=129 00:22:09.534 07:39:25 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:22:09.534 07:39:25 -- target/shutdown.sh@64 -- # ret=0 00:22:09.534 07:39:25 -- target/shutdown.sh@65 -- # break 00:22:09.534 07:39:25 -- target/shutdown.sh@69 -- # return 0 00:22:09.534 07:39:25 -- target/shutdown.sh@109 -- # killprocess 4156739 00:22:09.534 07:39:25 -- common/autotest_common.sh@926 -- # '[' -z 4156739 ']' 00:22:09.534 07:39:25 -- common/autotest_common.sh@930 -- # kill -0 4156739 
00:22:09.534 07:39:25 -- common/autotest_common.sh@931 -- # uname
00:22:09.534 07:39:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:09.534 07:39:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4156739
00:22:09.534 07:39:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:22:09.534 07:39:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:22:09.534 07:39:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4156739'
00:22:09.534 killing process with pid 4156739
00:22:09.534 07:39:25 -- common/autotest_common.sh@945 -- # kill 4156739
00:22:09.534 07:39:25 -- common/autotest_common.sh@950 -- # wait 4156739
00:22:09.792 Received shutdown signal, test time was about 0.635817 seconds
00:22:09.792
00:22:09.792 Latency(us)
00:22:09.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:09.792 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme1n1 : 0.59 387.90 24.24 0.00 0.00 159733.25 19515.16 174762.67
00:22:09.792 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme2n1 : 0.56 406.95 25.43 0.00 0.00 150039.54 18058.81 121945.51
00:22:09.792 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme3n1 : 0.59 390.30 24.39 0.00 0.00 154817.14 13301.38 159228.21
00:22:09.792 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme4n1 : 0.58 389.99 24.37 0.00 0.00 152420.81 20388.98 150684.25
00:22:09.792 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme5n1 : 0.63 428.35 26.77 0.00 0.00 129835.35 13883.92 124275.67
00:22:09.792 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme6n1 : 0.60 377.38 23.59 0.00 0.00 143444.00 20388.98 114955.00
00:22:09.792 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme7n1 : 0.57 396.84 24.80 0.00 0.00 142638.53 21942.42 118061.89
00:22:09.792 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme8n1 : 0.59 385.03 24.06 0.00 0.00 147194.54 15825.73 132042.90
00:22:09.792 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme9n1 : 0.63 430.36 26.90 0.00 0.00 121687.25 19418.07 113401.55
00:22:09.792 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:09.792 Verification LBA range: start 0x0 length 0x400
00:22:09.792 Nvme10n1 : 0.58 404.26 25.27 0.00 0.00 134355.23 11019.76 119615.34
00:22:09.792 ===================================================================================================================
00:22:09.792 Total : 3997.34 249.83 0.00 0.00 142941.30 11019.76 174762.67
00:22:10.049 07:39:26 -- target/shutdown.sh@112 -- # sleep 1
00:22:10.982 07:39:27 -- target/shutdown.sh@113 -- # kill -0 4156428
00:22:10.982 07:39:27 -- target/shutdown.sh@115
-- # stoptarget 00:22:10.982 07:39:27 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:10.982 07:39:27 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:10.982 07:39:27 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:10.982 07:39:27 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:10.982 07:39:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:10.982 07:39:27 -- nvmf/common.sh@116 -- # sync 00:22:10.982 07:39:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:10.982 07:39:27 -- nvmf/common.sh@119 -- # set +e 00:22:10.982 07:39:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:10.982 07:39:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:10.982 rmmod nvme_tcp 00:22:10.982 rmmod nvme_fabrics 00:22:10.982 rmmod nvme_keyring 00:22:10.982 07:39:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:10.982 07:39:27 -- nvmf/common.sh@123 -- # set -e 00:22:10.982 07:39:27 -- nvmf/common.sh@124 -- # return 0 00:22:10.982 07:39:27 -- nvmf/common.sh@477 -- # '[' -n 4156428 ']' 00:22:10.982 07:39:27 -- nvmf/common.sh@478 -- # killprocess 4156428 00:22:10.982 07:39:27 -- common/autotest_common.sh@926 -- # '[' -z 4156428 ']' 00:22:10.982 07:39:27 -- common/autotest_common.sh@930 -- # kill -0 4156428 00:22:10.982 07:39:27 -- common/autotest_common.sh@931 -- # uname 00:22:10.982 07:39:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:10.982 07:39:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4156428 00:22:11.240 07:39:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:11.240 07:39:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:11.240 07:39:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4156428' 00:22:11.240 killing process with pid 4156428 00:22:11.240 07:39:27 -- common/autotest_common.sh@945 -- # kill 4156428 00:22:11.240 07:39:27 -- common/autotest_common.sh@950 -- # wait 4156428 00:22:11.808 07:39:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:11.808 07:39:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:11.808 07:39:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:11.808 07:39:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.808 07:39:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:11.808 07:39:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.808 07:39:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.808 07:39:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.712 07:39:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:13.712 00:22:13.712 real 0m8.371s 00:22:13.712 user 0m26.451s 00:22:13.712 sys 0m1.482s 00:22:13.712 07:39:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.712 07:39:29 -- common/autotest_common.sh@10 -- # set +x 00:22:13.712 ************************************ 00:22:13.712 END TEST nvmf_shutdown_tc2 00:22:13.712 ************************************ 00:22:13.712 07:39:29 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:13.712 07:39:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:13.712 07:39:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:13.712 07:39:29 -- common/autotest_common.sh@10 -- # set +x 00:22:13.712 ************************************ 00:22:13.712 START 
TEST nvmf_shutdown_tc3 00:22:13.712 ************************************ 00:22:13.712 07:39:29 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:22:13.712 07:39:29 -- target/shutdown.sh@120 -- # starttarget 00:22:13.712 07:39:29 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:13.712 07:39:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:13.712 07:39:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.712 07:39:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:13.712 07:39:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:13.712 07:39:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:13.712 07:39:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.712 07:39:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.712 07:39:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.712 07:39:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:13.712 07:39:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:13.712 07:39:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:13.712 07:39:29 -- common/autotest_common.sh@10 -- # set +x 00:22:13.712 07:39:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:13.712 07:39:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:13.712 07:39:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:13.712 07:39:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:13.712 07:39:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:13.712 07:39:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:13.712 07:39:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:13.712 07:39:29 -- nvmf/common.sh@294 -- # net_devs=() 00:22:13.712 07:39:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:13.712 07:39:29 -- nvmf/common.sh@295 -- # e810=() 00:22:13.712 07:39:29 -- nvmf/common.sh@295 -- # local -ga e810 00:22:13.712 07:39:29 -- nvmf/common.sh@296 -- # x722=() 00:22:13.712 07:39:29 -- nvmf/common.sh@296 -- # local -ga x722 00:22:13.712 07:39:29 -- nvmf/common.sh@297 -- # mlx=() 00:22:13.712 07:39:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:13.712 07:39:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.712 07:39:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:13.712 07:39:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:13.712 07:39:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:13.712 07:39:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:13.712 07:39:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:13.712 07:39:29 -- nvmf/common.sh@334 
00:22:13.712 07:39:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:22:13.712 07:39:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:22:13.712 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:22:13.712 07:39:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:22:13.712 07:39:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:22:13.712 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:22:13.712 07:39:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:22:13.712 07:39:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:22:13.712 07:39:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:13.712 07:39:29 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:22:13.712 07:39:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:13.712 07:39:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:22:13.712 Found net devices under 0000:0a:00.0: cvl_0_0
00:22:13.712 07:39:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:22:13.712 07:39:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:22:13.712 07:39:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:13.712 07:39:29 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:22:13.712 07:39:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:13.712 07:39:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:22:13.712 Found net devices under 0000:0a:00.1: cvl_0_1
00:22:13.712 07:39:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:22:13.712 07:39:29 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:22:13.712 07:39:29 -- nvmf/common.sh@402 -- # is_hw=yes
00:22:13.712 07:39:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:22:13.712 07:39:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
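The discovery pass traced above is pure sysfs: each whitelisted PCI function is mapped to its kernel interface name by globbing the device's net/ directory. A standalone sketch of the same lookup, using the addresses from this run as examples:

    # Map a PCI function to its net device(s), as in the loop traced above.
    pci=0000:0a:00.0                                   # example address from this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"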
"$NVMF_TARGET_NAMESPACE") 00:22:13.712 07:39:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:13.712 07:39:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:13.712 07:39:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.712 07:39:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.712 07:39:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.712 07:39:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.712 07:39:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:13.712 07:39:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.971 07:39:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.971 07:39:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.971 07:39:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:13.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:22:13.971 00:22:13.971 --- 10.0.0.2 ping statistics --- 00:22:13.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.971 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:13.971 07:39:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:13.971 00:22:13.971 --- 10.0.0.1 ping statistics --- 00:22:13.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.971 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:13.971 07:39:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.971 07:39:29 -- nvmf/common.sh@410 -- # return 0 00:22:13.971 07:39:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:13.971 07:39:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.971 07:39:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:13.971 07:39:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:13.971 07:39:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.971 07:39:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:13.971 07:39:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:13.971 07:39:29 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:13.971 07:39:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:13.971 07:39:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:13.971 07:39:29 -- common/autotest_common.sh@10 -- # set +x 00:22:13.971 07:39:29 -- nvmf/common.sh@469 -- # nvmfpid=4157671 00:22:13.972 07:39:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:13.972 07:39:29 -- nvmf/common.sh@470 -- # waitforlisten 4157671 00:22:13.972 07:39:29 -- common/autotest_common.sh@819 -- # '[' -z 4157671 ']' 00:22:13.972 07:39:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.972 07:39:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:13.972 07:39:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:13.971 07:39:29 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:22:13.971 07:39:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:22:13.971 07:39:29 -- common/autotest_common.sh@712 -- # xtrace_disable
00:22:13.971 07:39:29 -- common/autotest_common.sh@10 -- # set +x
00:22:13.971 07:39:29 -- nvmf/common.sh@469 -- # nvmfpid=4157671
00:22:13.972 07:39:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:13.972 07:39:29 -- nvmf/common.sh@470 -- # waitforlisten 4157671
00:22:13.972 07:39:29 -- common/autotest_common.sh@819 -- # '[' -z 4157671 ']'
00:22:13.972 07:39:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:13.972 07:39:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:13.972 07:39:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:13.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:13.972 07:39:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:13.972 07:39:29 -- common/autotest_common.sh@10 -- # set +x
00:22:13.972 [2024-07-14 07:39:29.980236] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:22:13.972 [2024-07-14 07:39:29.980324] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:13.972 EAL: No free 2048 kB hugepages reported on node 1
00:22:13.972 [2024-07-14 07:39:30.056041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:14.230 [2024-07-14 07:39:30.175979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:14.230 [2024-07-14 07:39:30.176144] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:14.230 [2024-07-14 07:39:30.176164] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:14.230 [2024-07-14 07:39:30.176178] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:14.230 [2024-07-14 07:39:30.176288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:14.230 [2024-07-14 07:39:30.176312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:22:14.230 [2024-07-14 07:39:30.176388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:22:14.230 [2024-07-14 07:39:30.176391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:14.796 07:39:30 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:14.796 07:39:30 -- common/autotest_common.sh@852 -- # return 0
00:22:14.796 07:39:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:22:14.796 07:39:30 -- common/autotest_common.sh@718 -- # xtrace_disable
00:22:14.796 07:39:30 -- common/autotest_common.sh@10 -- # set +x
00:22:15.054 07:39:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:15.054 07:39:30 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:15.054 07:39:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:15.054 07:39:30 -- common/autotest_common.sh@10 -- # set +x
00:22:15.054 [2024-07-14 07:39:30.987424] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:15.054 07:39:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:15.054 07:39:30 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:22:15.054 07:39:30 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:22:15.054 07:39:30 -- common/autotest_common.sh@712 -- # xtrace_disable
00:22:15.054 07:39:30 -- common/autotest_common.sh@10 -- # set +x
00:22:15.054 07:39:30 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.054 07:39:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:22:15.054 07:39:31 -- target/shutdown.sh@28 -- # cat
00:22:15.055 07:39:31 -- target/shutdown.sh@35 -- # rpc_cmd
00:22:15.055 07:39:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:15.055 07:39:31 -- common/autotest_common.sh@10 -- # set +x
00:22:15.055 Malloc1
[2024-07-14 07:39:31.070684] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
Malloc2
00:22:15.055 Malloc3
00:22:15.055 Malloc4
00:22:15.313 Malloc5
00:22:15.313 Malloc6
00:22:15.313 Malloc7
00:22:15.313 Malloc8
00:22:15.313 Malloc9
00:22:15.571 Malloc10
00:22:15.571 07:39:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:15.571 07:39:31 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:22:15.571 07:39:31 -- common/autotest_common.sh@718 -- # xtrace_disable
00:22:15.571 07:39:31 -- common/autotest_common.sh@10 -- # set +x
00:22:15.571 07:39:31 -- target/shutdown.sh@124 -- # perfpid=4157870
00:22:15.571 07:39:31 -- target/shutdown.sh@125 -- # waitforlisten 4157870 /var/tmp/bdevperf.sock
00:22:15.571 07:39:31 -- common/autotest_common.sh@819 -- # '[' -z 4157870 ']'
00:22:15.571 07:39:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:15.571 07:39:31 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:22:15.571 07:39:31 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:22:15.571 07:39:31 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:15.571 07:39:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:15.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
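The create_subsystems loop above only shows cat calls because the per-subsystem RPC lines are written into rpcs.txt and replayed in a single rpc_cmd pass; the Malloc1..Malloc10 lines are the bdev names echoed back as each batch entry executes. The actual rpcs.txt contents are not visible in the trace, so the following is a hypothetical reconstruction of the batching pattern, using plausible SPDK RPC names:

    # Hypothetical reconstruction -- the real rpcs.txt lines are elided by the trace.
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
    for i in {1..10}; do
        printf '%s\n' \
            "bdev_malloc_create -b Malloc$i 128 512" \
            "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
            "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
            "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420" >>"$rpcs"
    done
    while read -r rpc; do
        ./scripts/rpc.py $rpc                  # replay one RPC per line, in order
    done <"$rpcs"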
00:22:15.572 07:39:31 -- nvmf/common.sh@520 -- # config=()
00:22:15.572 07:39:31 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:15.572 07:39:31 -- nvmf/common.sh@520 -- # local subsystem config
00:22:15.572 07:39:31 -- common/autotest_common.sh@10 -- # set +x
00:22:15.572 07:39:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:22:15.572 07:39:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:22:15.572 {
00:22:15.572 "params": {
00:22:15.572 "name": "Nvme$subsystem",
00:22:15.572 "trtype": "$TEST_TRANSPORT",
00:22:15.572 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:15.572 "adrfam": "ipv4",
00:22:15.572 "trsvcid": "$NVMF_PORT",
00:22:15.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:15.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:15.572 "hdgst": ${hdgst:-false},
00:22:15.572 "ddgst": ${ddgst:-false}
00:22:15.572 },
00:22:15.572 "method": "bdev_nvme_attach_controller"
00:22:15.572 }
00:22:15.572 EOF
00:22:15.572 )")
00:22:15.572 07:39:31 -- nvmf/common.sh@542 -- # cat
00:22:15.572 07:39:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:22:15.572 07:39:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:22:15.572 {
00:22:15.572 "params": {
00:22:15.572 "name": "Nvme$subsystem",
00:22:15.572 "trtype": "$TEST_TRANSPORT",
00:22:15.572 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:15.572 "adrfam": "ipv4",
00:22:15.572 "trsvcid": "$NVMF_PORT",
00:22:15.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:15.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:15.572 "hdgst": ${hdgst:-false},
00:22:15.572 "ddgst": ${ddgst:-false}
00:22:15.572 },
00:22:15.572 "method": "bdev_nvme_attach_controller"
00:22:15.572 }
00:22:15.572 EOF
00:22:15.572 )")
00:22:15.572 07:39:31 -- nvmf/common.sh@542 -- # cat
00:22:15.572 07:39:31 -- nvmf/common.sh@544 -- # jq .
00:22:15.572 07:39:31 -- nvmf/common.sh@545 -- # IFS=,
00:22:15.572 07:39:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:22:15.572 "params": {
00:22:15.572 "name": "Nvme1",
00:22:15.572 "trtype": "tcp",
00:22:15.572 "traddr": "10.0.0.2",
00:22:15.572 "adrfam": "ipv4",
00:22:15.572 "trsvcid": "4420",
00:22:15.572 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:15.572 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:15.572 "hdgst": false,
00:22:15.572 "ddgst": false
00:22:15.572 },
00:22:15.572 "method": "bdev_nvme_attach_controller"
00:22:15.572 },{
00:22:15.572 "params": {
00:22:15.572 "name": "Nvme2",
00:22:15.572 "trtype": "tcp",
00:22:15.572 "traddr": "10.0.0.2",
00:22:15.572 "adrfam": "ipv4",
00:22:15.572 "trsvcid": "4420",
00:22:15.572 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:22:15.572 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:22:15.572 "hdgst": false,
00:22:15.572 "ddgst": false
00:22:15.572 },
00:22:15.572 "method": "bdev_nvme_attach_controller"
00:22:15.572 },{
00:22:15.572 "params": {
00:22:15.572 "name": "Nvme3",
00:22:15.572 "trtype": "tcp",
00:22:15.572 "traddr": "10.0.0.2",
00:22:15.572 "adrfam": "ipv4",
00:22:15.572 "trsvcid": "4420",
00:22:15.572 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:22:15.572 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:22:15.572 "hdgst": false,
00:22:15.572 "ddgst": false
00:22:15.573 },
00:22:15.573 "method": "bdev_nvme_attach_controller"
00:22:15.573 },{
00:22:15.573 "params": {
00:22:15.573 "name": "Nvme4",
00:22:15.573 "trtype": "tcp",
00:22:15.573 "traddr": "10.0.0.2",
00:22:15.573 "adrfam": "ipv4",
00:22:15.573 "trsvcid": "4420",
00:22:15.573 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:22:15.573 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:22:15.573 "hdgst": false,
00:22:15.573 "ddgst": false
00:22:15.573 },
00:22:15.573 "method": "bdev_nvme_attach_controller"
00:22:15.573 },{
00:22:15.573 "params": {
00:22:15.573 "name": "Nvme5",
00:22:15.573 "trtype": "tcp",
00:22:15.573 "traddr": "10.0.0.2",
00:22:15.573 "adrfam": "ipv4",
00:22:15.573 "trsvcid": "4420",
00:22:15.573 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:22:15.573 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:22:15.573 "hdgst": false,
00:22:15.573 "ddgst": false
00:22:15.573 },
00:22:15.573 "method": "bdev_nvme_attach_controller"
00:22:15.573 },{
00:22:15.573 "params": {
00:22:15.573 "name": "Nvme6",
00:22:15.573 "trtype": "tcp",
00:22:15.573 "traddr": "10.0.0.2",
00:22:15.573 "adrfam": "ipv4",
00:22:15.573 "trsvcid": "4420",
00:22:15.573 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:22:15.573 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:22:15.573 "hdgst": false,
00:22:15.573 "ddgst": false
00:22:15.573 },
00:22:15.573 "method": "bdev_nvme_attach_controller"
00:22:15.573 },{
00:22:15.573 "params": {
00:22:15.573 "name": "Nvme7",
00:22:15.573 "trtype": "tcp",
00:22:15.573 "traddr": "10.0.0.2",
00:22:15.573 "adrfam": "ipv4",
00:22:15.573 "trsvcid": "4420",
00:22:15.573 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:22:15.573 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:22:15.573 "hdgst": false,
00:22:15.573 "ddgst": false
00:22:15.573 },
00:22:15.573 "method": "bdev_nvme_attach_controller"
00:22:15.573 },{
00:22:15.573 "params": {
00:22:15.573 "name": "Nvme8",
00:22:15.573 "trtype": "tcp",
00:22:15.573 "traddr": "10.0.0.2",
00:22:15.573 "adrfam": "ipv4",
00:22:15.573 "trsvcid": "4420",
00:22:15.573 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:22:15.573 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:22:15.573 "hdgst": false,
00:22:15.573 "ddgst": false
00:22:15.573 },
00:22:15.573 "method": "bdev_nvme_attach_controller"
00:22:15.573 },{
00:22:15.573 "params": {
00:22:15.573 "name": "Nvme9",
00:22:15.573 "trtype": "tcp",
00:22:15.573 "traddr": "10.0.0.2",
00:22:15.573 "adrfam": "ipv4",
00:22:15.573 "trsvcid": "4420",
00:22:15.573 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:22:15.573 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:22:15.573 "hdgst": false,
00:22:15.573 "ddgst": false
00:22:15.573 },
00:22:15.573 "method": "bdev_nvme_attach_controller"
00:22:15.573 },{
00:22:15.573 "params": {
00:22:15.573 "name": "Nvme10",
00:22:15.573 "trtype": "tcp",
00:22:15.573 "traddr": "10.0.0.2",
00:22:15.573 "adrfam": "ipv4",
00:22:15.573 "trsvcid": "4420",
00:22:15.573 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:22:15.573 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:22:15.573 "hdgst": false,
00:22:15.573 "ddgst": false
00:22:15.573 },
00:22:15.573 "method": "bdev_nvme_attach_controller"
00:22:15.573 }'
00:22:15.573 [2024-07-14 07:39:31.588424] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:22:15.573 [2024-07-14 07:39:31.588495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4157870 ]
00:22:15.573 EAL: No free 2048 kB hugepages reported on node 1
00:22:15.573 [2024-07-14 07:39:31.651219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:15.831 [2024-07-14 07:39:31.760414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:17.211 Running I/O for 10 seconds...
00:22:17.211 07:39:33 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:17.211 07:39:33 -- common/autotest_common.sh@852 -- # return 0
00:22:17.211 07:39:33 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:22:17.211 07:39:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:17.211 07:39:33 -- common/autotest_common.sh@10 -- # set +x
00:22:17.211 07:39:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:17.211 07:39:33 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:17.211 07:39:33 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:22:17.211 07:39:33 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:22:17.211 07:39:33 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:22:17.211 07:39:33 -- target/shutdown.sh@57 -- # local ret=1
00:22:17.211 07:39:33 -- target/shutdown.sh@58 -- # local i
00:22:17.211 07:39:33 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:22:17.211 07:39:33 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:22:17.211 07:39:33 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:22:17.211 07:39:33 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:22:17.211 07:39:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:17.211 07:39:33 -- common/autotest_common.sh@10 -- # set +x
00:22:17.211 07:39:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:17.211 07:39:33 -- target/shutdown.sh@60 -- # read_io_count=42
00:22:17.211 07:39:33 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']'
00:22:17.211 07:39:33 -- target/shutdown.sh@67 -- # sleep 0.25
00:22:17.471 07:39:33 -- target/shutdown.sh@59 -- # (( i-- ))
00:22:17.471 07:39:33 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:22:17.471 07:39:33 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:22:17.471 07:39:33 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:22:17.471 07:39:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:17.471 07:39:33 -- common/autotest_common.sh@10 -- # set +x
00:22:17.742 07:39:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:17.742 07:39:33 -- target/shutdown.sh@60 -- # read_io_count=131
00:22:17.742 07:39:33 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:22:17.742 07:39:33 -- target/shutdown.sh@64 -- # ret=0
00:22:17.742 07:39:33 -- target/shutdown.sh@65 -- # break
00:22:17.742 07:39:33 -- target/shutdown.sh@69 -- # return 0
00:22:17.742 07:39:33 -- target/shutdown.sh@134 -- # killprocess 4157671
00:22:17.742 07:39:33 -- common/autotest_common.sh@926 -- # '[' -z 4157671 ']'
00:22:17.742 07:39:33 -- common/autotest_common.sh@930 -- # kill -0 4157671
00:22:17.742 07:39:33 -- common/autotest_common.sh@931 -- # uname
00:22:17.742 07:39:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:17.742 07:39:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4157671
00:22:17.742 07:39:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:17.742 07:39:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:17.742 07:39:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4157671'
killing process with pid 4157671
00:22:17.742 07:39:33 -- common/autotest_common.sh@945 -- # kill 4157671
00:22:17.742 07:39:33 -- common/autotest_common.sh@950 -- # wait 4157671
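The waitforio helper traced above is a bounded poll: up to ten times, read num_read_ops for Nvme1n1 from bdevperf's RPC socket and stop once at least 100 reads have completed (42 on the first sample, 131 on the second in this run). A minimal sketch under the same assumptions:

    # Poll bdevperf until Nvme1n1 has served >= 100 reads, at most 10 tries.
    waitforio() {
        local sock=$1 bdev=$2 ret=1 i ops
        for ((i = 10; i != 0; i--)); do
            ops=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                  jq -r '.bdevs[0].num_read_ops')
            if [ "$ops" -ge 100 ]; then          # enough I/O observed; bdev is live
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme1n1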
00:22:17.742 [2024-07-14 07:39:33.680943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169d100 is same with the state(5) to be set
00:22:17.742 [2024-07-14 07:39:33.685955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa90 is same with the state(5) to be set
00:22:17.742 [2024-07-14 07:39:33.687807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169d590 is same with the state(5) to be set
00:22:17.743 [2024-07-14 07:39:33.691353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.743 [2024-07-14 07:39:33.691407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.743 [2024-07-14 07:39:33.691426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.743 [2024-07-14 07:39:33.691440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.743 [2024-07-14 07:39:33.691454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.743 [2024-07-14 07:39:33.691468]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.743 [2024-07-14 07:39:33.691482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.743 [2024-07-14 07:39:33.691496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.743 [2024-07-14 07:39:33.691509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245d4f0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.691602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.743 [2024-07-14 07:39:33.691623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.743 [2024-07-14 07:39:33.691638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.743 [2024-07-14 07:39:33.691652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.743 [2024-07-14 07:39:33.691671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.743 [2024-07-14 07:39:33.691685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.743 [2024-07-14 07:39:33.691699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.743 [2024-07-14 07:39:33.691712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.743 [2024-07-14 07:39:33.691725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2422210 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169da40 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 
07:39:33.696929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.696988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same 
with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697465] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.697609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ded0 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e380 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e380 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.698989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the 
state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.743 [2024-07-14 07:39:33.699397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 
07:39:33.699526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.699884] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:17.744 [2024-07-14 07:39:33.699978] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:17.744 [2024-07-14 07:39:33.704764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.704982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705050] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the 
state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.705619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ecc0 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 
07:39:33.706814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.706991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same 
with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707348] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.707371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f150 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.710392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.744 [2024-07-14 07:39:33.710425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.744 [2024-07-14 07:39:33.710443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.744 [2024-07-14 07:39:33.710457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.744 [2024-07-14 07:39:33.710471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.744 [2024-07-14 07:39:33.710485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.744 [2024-07-14 07:39:33.710499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.744 [2024-07-14 07:39:33.710511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.744 [2024-07-14 07:39:33.710525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6830 is same with the state(5) to be set 00:22:17.744 [2024-07-14 07:39:33.710565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245d4f0 (9): Bad file descriptor 00:22:17.744 [2024-07-14 07:39:33.710621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.744 [2024-07-14 07:39:33.710642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.710657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.710671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.710684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.710698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.710717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.710731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:17.745 [2024-07-14 07:39:33.710744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241ba30 is same with the state(5) to be set 00:22:17.745 [2024-07-14 07:39:33.710793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.710813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.710828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.710841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.710860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.710883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.710898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.710911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.710924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e8a60 is same with the state(5) to be set 00:22:17.745 [2024-07-14 07:39:33.710968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.710989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2424f70 is same with the state(5) to be set 00:22:17.745 [2024-07-14 07:39:33.711126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711146] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e3490 is same with the state(5) to be set 00:22:17.745 [2024-07-14 07:39:33.711278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2422210 (9): Bad file descriptor 00:22:17.745 [2024-07-14 07:39:33.711326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e38c0 is same with the state(5) to be set 00:22:17.745 [2024-07-14 07:39:33.711484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 
[2024-07-14 07:39:33.711532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385a50 is same with the state(5) to be set 00:22:17.745 [2024-07-14 07:39:33.711642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.745 [2024-07-14 07:39:33.711752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.711764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241be60 is same with the state(5) to be set 00:22:17.745 [2024-07-14 07:39:33.711901] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:17.745 [2024-07-14 07:39:33.711967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.745 [2024-07-14 07:39:33.711988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.712014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.745 [2024-07-14 07:39:33.712030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.745 [2024-07-14 07:39:33.712048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL 
00:22:17.745 [2024-07-14 07:39:33.713988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a260 is same with the state(5) to be set
00:22:17.746 [2024-07-14 07:39:33.714065] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x259a260 was disconnected and freed. reset controller.
00:22:17.746 [2024-07-14 07:39:33.721876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.746 [2024-07-14 07:39:33.721949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.723983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.723996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.724121] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2563210 was disconnected and freed. reset controller.
00:22:17.747 [2024-07-14 07:39:33.730462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25a6830 (9): Bad file descriptor
00:22:17.747 [2024-07-14 07:39:33.730544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241ba30 (9): Bad file descriptor
00:22:17.747 [2024-07-14 07:39:33.730571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e8a60 (9): Bad file descriptor
00:22:17.747 [2024-07-14 07:39:33.730599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2424f70 (9): Bad file descriptor
00:22:17.747 [2024-07-14 07:39:33.730623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e3490 (9): Bad file descriptor
00:22:17.747 [2024-07-14 07:39:33.730653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e38c0 (9): Bad file descriptor
00:22:17.747 [2024-07-14 07:39:33.730684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2385a50 (9): Bad file descriptor
00:22:17.747 [2024-07-14 07:39:33.730713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241be60 (9): Bad file descriptor
00:22:17.747 [2024-07-14 07:39:33.732090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.732120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
07:39:33.733556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733861] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.733978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.733992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.734008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.734022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.734038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.734052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.734068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.747 [2024-07-14 07:39:33.734087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.747 [2024-07-14 07:39:33.734103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2561c30 is same with the state(5) to be set 00:22:17.747 [2024-07-14 07:39:33.734187] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2561c30 was disconnected and freed. reset controller. 
00:22:17.747 [2024-07-14 07:39:33.735342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.735366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.735387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.735403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.735419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.735434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.735449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.735463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.735479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.735493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.735509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.735523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.735539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.747 [2024-07-14 07:39:33.735553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.747 [2024-07-14 07:39:33.735569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.735974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.735988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.736977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.736992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737406] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25647f0 was disconnected and freed. reset controller.
00:22:17.748 [2024-07-14 07:39:33.737463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.748 [2024-07-14 07:39:33.737979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.748 [2024-07-14 07:39:33.737995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.738974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.738990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739485] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2565dd0 was disconnected and freed. reset controller.
00:22:17.749 [2024-07-14 07:39:33.739564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.739974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.739987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.749 [2024-07-14 07:39:33.740583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.749 [2024-07-14 07:39:33.740598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.740971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.740985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741581] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x348a1a0 was disconnected and freed. reset controller.
00:22:17.750 [2024-07-14 07:39:33.741655] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:17.750 [2024-07-14 07:39:33.741741] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.750 [2024-07-14 07:39:33.741767] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.750 [2024-07-14 07:39:33.741790] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.750 [2024-07-14 07:39:33.741817] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.750 [2024-07-14 07:39:33.741837] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.750 [2024-07-14 07:39:33.741856] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.750 [2024-07-14 07:39:33.741889] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.750 [2024-07-14 07:39:33.741944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.741965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.741984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.742971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.742986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.743001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.743016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.743032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.743046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.743062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.743078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.743094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.743109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.743125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.743139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.743155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.750 [2024-07-14 07:39:33.743169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.750 [2024-07-14 07:39:33.743188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.743904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.743919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2599f40 is same with the state(5) to be set
00:22:17.751 [2024-07-14 07:39:33.749681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.749743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.749775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.749791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.749808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.749823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.749840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.749855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.749879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.749896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.749912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.749927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.749943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.749957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.749973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.749988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.750974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.750989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.751 [2024-07-14 07:39:33.751454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.751 [2024-07-14 07:39:33.751467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.751483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.751497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.751512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.751526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.751542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.751556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.751571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.751585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.751600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.751614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.751629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.751643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.751662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.751676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.751692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x362cd70 is same with the state(5) to be set
00:22:17.752 [2024-07-14 07:39:33.753811] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:17.752 [2024-07-14 07:39:33.754165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:17.752 [2024-07-14 07:39:33.754333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:17.752 [2024-07-14 07:39:33.754358]
nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2424f70 with addr=10.0.0.2, port=4420
00:22:17.752 [2024-07-14 07:39:33.754376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2424f70 is same with the state(5) to be set
00:22:17.752 [2024-07-14 07:39:33.754428] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.752 [2024-07-14 07:39:33.754454] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.752 [2024-07-14 07:39:33.754476] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.752 [2024-07-14 07:39:33.754495] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.752 [2024-07-14 07:39:33.754516] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.752 [2024-07-14 07:39:33.754535] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.752 [2024-07-14 07:39:33.754554] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.752 [2024-07-14 07:39:33.754573] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.752 [2024-07-14 07:39:33.754600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2424f70 (9): Bad file descriptor
00:22:17.752 [2024-07-14 07:39:33.755032] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:17.752 [2024-07-14 07:39:33.755413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:17.752 [2024-07-14 07:39:33.755609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:17.752 [2024-07-14 07:39:33.755770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:17.752 [2024-07-14 07:39:33.755795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25e3490 with addr=10.0.0.2, port=4420
00:22:17.752 [2024-07-14 07:39:33.755811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e3490 is same with the state(5) to be set
00:22:17.752 [2024-07-14 07:39:33.756148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.756175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.756201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.756217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.756235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.752 [2024-07-14 07:39:33.756250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.752 [2024-07-14 07:39:33.756273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 
07:39:33.756570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756875] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.756981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.756997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.757975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.757989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.758004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.758018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.758034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.758047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.758063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.758076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.758092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.752 [2024-07-14 07:39:33.758107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.752 [2024-07-14 07:39:33.758121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2411e70 is same with the state(5) to be set 00:22:17.752 [2024-07-14 07:39:33.759415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.759982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.759997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.760966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.760983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.753 [2024-07-14 07:39:33.761355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.753 [2024-07-14 07:39:33.761373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259b820 is same with the state(5) to be set 00:22:17.753 [2024-07-14 07:39:33.764337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:17.753 [2024-07-14 07:39:33.764381] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:17.753 [2024-07-14 07:39:33.764401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:17.753 [2024-07-14 07:39:33.764419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:17.753 [2024-07-14 07:39:33.764438] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:17.753 [2024-07-14 07:39:33.764794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.753 [2024-07-14 07:39:33.764976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.753 [2024-07-14 07:39:33.765004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2422210 with addr=10.0.0.2, port=4420 00:22:17.753 [2024-07-14 07:39:33.765023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2422210 is same with the state(5) to be set 00:22:17.753 [2024-07-14 07:39:33.765051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e3490 (9): Bad file descriptor 00:22:17.753 [2024-07-14 07:39:33.765070] 
nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:17.753 [2024-07-14 07:39:33.765091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:17.753 [2024-07-14 07:39:33.765108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:17.753 [2024-07-14 07:39:33.765169] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.753 [2024-07-14 07:39:33.765204] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.753 [2024-07-14 07:39:33.765226] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.753 [2024-07-14 07:39:33.765245] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:17.753 task offset: 29184 on job bdev=Nvme3n1 fails
00:22:17.753
00:22:17.753 Latency(us)
00:22:17.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.753 Job: Nvme1n1 ended in about 0.60 seconds with error
00:22:17.753 Verification LBA range: start 0x0 length 0x400
00:22:17.753 Nvme1n1 : 0.60 280.40 17.53 106.82 0.00 163964.71 10582.85 173985.94
00:22:17.753 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.753 Job: Nvme2n1 ended in about 0.61 seconds with error
00:22:17.753 Verification LBA range: start 0x0 length 0x400
00:22:17.753 Nvme2n1 : 0.61 267.41 16.71 104.35 0.00 168685.15 100973.99 150684.25
00:22:17.753 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.754 Job: Nvme3n1 ended in about 0.59 seconds with error
00:22:17.754 Verification LBA range: start 0x0 length 0x400
00:22:17.754 Nvme3n1 : 0.59 354.92 22.18 109.21 0.00 133214.23 23495.87 127382.57
00:22:17.754 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.754 Job: Nvme4n1 ended in about 0.62 seconds with error
00:22:17.754 Verification LBA range: start 0x0 length 0x400
00:22:17.754 Nvme4n1 : 0.62 266.01 16.63 103.81 0.00 165432.62 95148.56 144470.47
00:22:17.754 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.754 Job: Nvme5n1 ended in about 0.60 seconds with error
00:22:17.754 Verification LBA range: start 0x0 length 0x400
00:22:17.754 Nvme5n1 : 0.60 346.41 21.65 106.59 0.00 133063.60 23787.14 131266.18
00:22:17.754 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.754 Job: Nvme6n1 ended in about 0.59 seconds with error
00:22:17.754 Verification LBA range: start 0x0 length 0x400
00:22:17.754 Nvme6n1 : 0.59 352.89 22.06 108.58 0.00 128731.12 53205.52 118061.89
00:22:17.754 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.754 Job: Nvme7n1 ended in about 0.60 seconds with error
00:22:17.754 Verification LBA range: start 0x0 length 0x400
00:22:17.754 Nvme7n1 : 0.60 345.78 21.61 106.39 0.00 129829.10 56700.78 118838.61
00:22:17.754 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.754 Job: Nvme8n1 ended in about 0.60 seconds with error
00:22:17.754 Verification LBA range: start 0x0 length 0x400
00:22:17.754 Nvme8n1 : 0.60 345.16 21.57 106.20 0.00 128363.11 47768.46 115731.72
00:22:17.754 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.754 Job: Nvme9n1 ended in about 0.60 seconds with error
00:22:17.754 Verification LBA range: start 0x0 length 0x400
00:22:17.754 Nvme9n1 : 0.60 344.55 21.53 106.01 0.00 126909.07 18350.08 118838.61
00:22:17.754 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.754 Job: Nvme10n1 ended in about 0.61 seconds with error
00:22:17.754 Verification LBA range: start 0x0 length 0x400
00:22:17.754 Nvme10n1 : 0.61 270.24 16.89 105.46 0.00 150299.30 78060.66 132042.90
00:22:17.754 ===================================================================================================================
00:22:17.754 Total : 3173.77 198.36 1063.43 0.00 141552.94 10582.85 173985.94
00:22:17.754 [2024-07-14 07:39:33.790500] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:17.754 [2024-07-14 07:39:33.790586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:17.754 [2024-07-14 07:39:33.790621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:17.754 [2024-07-14 07:39:33.790640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.790987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.791157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.791184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245d4f0 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.791204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245d4f0 is same with the state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.791367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.791537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.791562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25e38c0 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.791578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e38c0 is same with the state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.791754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.791903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.791929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241be60 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.791945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241be60 is same with the state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.792128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.792284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.792309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241ba30 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.792336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241ba30 is same with the
state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.792503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.792659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.792684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25e8a60 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.792700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e8a60 is same with the state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.792727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2422210 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.792749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.792763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.792780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:17.754 [2024-07-14 07:39:33.793479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.793676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.793825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.793851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2385a50 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.793873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385a50 is same with the state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.794033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.794187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.794212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25a6830 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.794228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a6830 is same with the state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.794247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245d4f0 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.794266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e38c0 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.794284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241be60 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.794300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241ba30 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.794318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e8a60 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.794333] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.794346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller 
reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.794359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:17.754 [2024-07-14 07:39:33.794385] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.754 [2024-07-14 07:39:33.794450] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.754 [2024-07-14 07:39:33.794473] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.754 [2024-07-14 07:39:33.794491] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.754 [2024-07-14 07:39:33.794515] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.754 [2024-07-14 07:39:33.794533] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:17.754 [2024-07-14 07:39:33.794610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.794643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2385a50 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.794663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25a6830 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.794679] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.794692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.794705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:17.754 [2024-07-14 07:39:33.794726] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.794740] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.794753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:17.754 [2024-07-14 07:39:33.794769] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.794782] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.794795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:17.754 [2024-07-14 07:39:33.794810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.794824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.794836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:22:17.754 [2024-07-14 07:39:33.794852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.794874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.794889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:17.754 [2024-07-14 07:39:33.794954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:17.754 [2024-07-14 07:39:33.794978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:17.754 [2024-07-14 07:39:33.794996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.795008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.795020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.795032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.795044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.795071] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.795087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.795100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:17.754 [2024-07-14 07:39:33.795121] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.795136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.795149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:17.754 [2024-07-14 07:39:33.795187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.795204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:17.754 [2024-07-14 07:39:33.795355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.795510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.795534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2424f70 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.795550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2424f70 is same with the state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.795699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.795855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.754 [2024-07-14 07:39:33.795887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25e3490 with addr=10.0.0.2, port=4420 00:22:17.754 [2024-07-14 07:39:33.795904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e3490 is same with the state(5) to be set 00:22:17.754 [2024-07-14 07:39:33.795946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2424f70 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.795970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e3490 (9): Bad file descriptor 00:22:17.754 [2024-07-14 07:39:33.796011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.796029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.796043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:17.754 [2024-07-14 07:39:33.796059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:17.754 [2024-07-14 07:39:33.796073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:17.754 [2024-07-14 07:39:33.796085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:17.754 [2024-07-14 07:39:33.796122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:17.754 [2024-07-14 07:39:33.796138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
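The failure signature above is the expected outcome of shutdown_tc3: the target is killed while bdevperf still has verify I/O queued, so in-flight commands complete as ABORTED - SQ DELETION and every reconnect attempt dies with errno 111 (ECONNREFUSED) until bdev_nvme fails cnode1 through cnode10. A minimal triage sketch, assuming the console output above has been captured to a file (the path here is hypothetical):
  log=/tmp/shutdown_tc3_console.log                # hypothetical capture of the output above
  grep -c 'ABORTED - SQ DELETION' "$log"           # in-flight I/O cancelled by submission-queue teardown
  grep -c 'connect() failed, errno = 111' "$log"   # ECONNREFUSED: the target is already gone
  grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*' "$log" | sort -u   # controllers caught in the failover storm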
00:22:18.322 07:39:34 -- target/shutdown.sh@135 -- # nvmfpid= 00:22:18.322 07:39:34 -- target/shutdown.sh@138 -- # sleep 1 00:22:19.256 07:39:35 -- target/shutdown.sh@141 -- # kill -9 4157870 00:22:19.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (4157870) - No such process 00:22:19.256 07:39:35 -- target/shutdown.sh@141 -- # true 00:22:19.256 07:39:35 -- target/shutdown.sh@143 -- # stoptarget 00:22:19.256 07:39:35 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:19.256 07:39:35 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:19.256 07:39:35 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:19.256 07:39:35 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:19.256 07:39:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:19.256 07:39:35 -- nvmf/common.sh@116 -- # sync 00:22:19.256 07:39:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:19.256 07:39:35 -- nvmf/common.sh@119 -- # set +e 00:22:19.256 07:39:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:19.256 07:39:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:19.256 rmmod nvme_tcp 00:22:19.256 rmmod nvme_fabrics 00:22:19.256 rmmod nvme_keyring 00:22:19.256 07:39:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:19.256 07:39:35 -- nvmf/common.sh@123 -- # set -e 00:22:19.256 07:39:35 -- nvmf/common.sh@124 -- # return 0 00:22:19.256 07:39:35 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:19.256 07:39:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:19.256 07:39:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:19.256 07:39:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:19.257 07:39:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.257 07:39:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:19.257 07:39:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.257 07:39:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.257 07:39:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.785 07:39:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:21.785 00:22:21.785 real 0m7.601s 00:22:21.785 user 0m18.632s 00:22:21.785 sys 0m1.384s 00:22:21.785 07:39:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.785 07:39:37 -- common/autotest_common.sh@10 -- # set +x 00:22:21.785 ************************************ 00:22:21.785 END TEST nvmf_shutdown_tc3 00:22:21.785 ************************************ 00:22:21.785 07:39:37 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:22:21.785 00:22:21.785 real 0m28.272s 00:22:21.785 user 1m21.184s 00:22:21.785 sys 0m6.090s 00:22:21.785 07:39:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.785 07:39:37 -- common/autotest_common.sh@10 -- # set +x 00:22:21.785 ************************************ 00:22:21.785 END TEST nvmf_shutdown 00:22:21.785 ************************************ 00:22:21.785 07:39:37 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:21.785 07:39:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:21.785 07:39:37 -- common/autotest_common.sh@10 -- # set +x 00:22:21.785 07:39:37 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:21.785 07:39:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:21.785 07:39:37 -- common/autotest_common.sh@10 -- # set +x 00:22:21.785 
07:39:37 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:21.785 07:39:37 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:21.785 07:39:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:21.785 07:39:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:21.785 07:39:37 -- common/autotest_common.sh@10 -- # set +x 00:22:21.786 ************************************ 00:22:21.786 START TEST nvmf_multicontroller 00:22:21.786 ************************************ 00:22:21.786 07:39:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:21.786 * Looking for test storage... 00:22:21.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:21.786 07:39:37 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.786 07:39:37 -- nvmf/common.sh@7 -- # uname -s 00:22:21.786 07:39:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.786 07:39:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.786 07:39:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.786 07:39:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.786 07:39:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.786 07:39:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.786 07:39:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.786 07:39:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.786 07:39:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.786 07:39:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.786 07:39:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.786 07:39:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.786 07:39:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.786 07:39:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.786 07:39:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.786 07:39:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.786 07:39:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.786 07:39:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.786 07:39:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.786 07:39:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.786 07:39:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.786 07:39:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.786 07:39:37 -- paths/export.sh@5 -- # export PATH 00:22:21.786 07:39:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.786 07:39:37 -- nvmf/common.sh@46 -- # : 0 00:22:21.786 07:39:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:21.786 07:39:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:21.786 07:39:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:21.786 07:39:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.786 07:39:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.786 07:39:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:21.786 07:39:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:21.786 07:39:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:21.786 07:39:37 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:21.786 07:39:37 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:21.786 07:39:37 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:21.786 07:39:37 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:21.786 07:39:37 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.786 07:39:37 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:21.786 07:39:37 -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:21.786 07:39:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:21.786 07:39:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.786 07:39:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:21.786 07:39:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:21.786 07:39:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:21.786 07:39:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.786 07:39:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.786 07:39:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
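nvmftestinit begins by tearing down leftovers from earlier runs (_remove_spdk_ns above) before probing NICs. A sketch of the equivalent manual cleanup, assuming the *_ns_spdk naming used throughout this run; this approximates the helper rather than reproducing its exact body:
  # Remove stale SPDK test namespaces so the topology built below starts clean.
  for ns in $(ip netns list | awk '{print $1}'); do
      case $ns in *_ns_spdk) ip netns delete "$ns" ;; esac
  done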
00:22:21.786 07:39:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:21.786 07:39:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:21.786 07:39:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:21.786 07:39:37 -- common/autotest_common.sh@10 -- # set +x 00:22:23.159 07:39:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:23.159 07:39:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:23.159 07:39:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:23.159 07:39:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:23.159 07:39:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:23.159 07:39:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:23.159 07:39:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:23.159 07:39:39 -- nvmf/common.sh@294 -- # net_devs=() 00:22:23.159 07:39:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:23.159 07:39:39 -- nvmf/common.sh@295 -- # e810=() 00:22:23.159 07:39:39 -- nvmf/common.sh@295 -- # local -ga e810 00:22:23.159 07:39:39 -- nvmf/common.sh@296 -- # x722=() 00:22:23.159 07:39:39 -- nvmf/common.sh@296 -- # local -ga x722 00:22:23.159 07:39:39 -- nvmf/common.sh@297 -- # mlx=() 00:22:23.159 07:39:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:23.159 07:39:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.159 07:39:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:23.159 07:39:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:23.159 07:39:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:23.159 07:39:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:23.159 07:39:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:23.159 07:39:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:23.159 07:39:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:23.160 07:39:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:23.160 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:23.160 07:39:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:23.160 07:39:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:23.160 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:23.160 07:39:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
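The NIC scan above matches PCI vendor:device pairs; 0x8086:0x159b is the Intel E810 ("ice" driver) requested by SPDK_TEST_NVMF_NICS=e810. The same lookup done by hand, using the function address found above:
  lspci -D -d 8086:159b                      # list E810 functions with full PCI addresses
  ls /sys/bus/pci/devices/0000:0a:00.0/net   # kernel netdev behind the function (cvl_0_0 on this host)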
00:22:23.160 07:39:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:23.160 07:39:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:23.160 07:39:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.160 07:39:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:23.160 07:39:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.160 07:39:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:23.160 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:23.160 07:39:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.160 07:39:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:23.160 07:39:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.160 07:39:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:23.160 07:39:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.160 07:39:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:23.160 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:23.160 07:39:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.160 07:39:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:23.160 07:39:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:23.160 07:39:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:23.160 07:39:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:23.160 07:39:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.160 07:39:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.160 07:39:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.160 07:39:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:23.160 07:39:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.160 07:39:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.160 07:39:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:23.160 07:39:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.160 07:39:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.160 07:39:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:23.160 07:39:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:23.160 07:39:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.160 07:39:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.419 07:39:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.419 07:39:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.419 07:39:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:23.419 07:39:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.419 07:39:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.419 07:39:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:22:23.419 07:39:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:23.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:22:23.419 00:22:23.419 --- 10.0.0.2 ping statistics --- 00:22:23.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.419 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:23.419 07:39:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:22:23.419 00:22:23.419 --- 10.0.0.1 ping statistics --- 00:22:23.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.419 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:23.419 07:39:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.419 07:39:39 -- nvmf/common.sh@410 -- # return 0 00:22:23.419 07:39:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:23.419 07:39:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.419 07:39:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:23.419 07:39:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:23.419 07:39:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.419 07:39:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:23.419 07:39:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:23.419 07:39:39 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:23.419 07:39:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:23.419 07:39:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:23.419 07:39:39 -- common/autotest_common.sh@10 -- # set +x 00:22:23.419 07:39:39 -- nvmf/common.sh@469 -- # nvmfpid=4160286 00:22:23.419 07:39:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:23.419 07:39:39 -- nvmf/common.sh@470 -- # waitforlisten 4160286 00:22:23.419 07:39:39 -- common/autotest_common.sh@819 -- # '[' -z 4160286 ']' 00:22:23.419 07:39:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.419 07:39:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:23.419 07:39:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.419 07:39:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:23.419 07:39:39 -- common/autotest_common.sh@10 -- # set +x 00:22:23.419 [2024-07-14 07:39:39.481925] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:23.419 [2024-07-14 07:39:39.482016] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.419 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.419 [2024-07-14 07:39:39.552705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:23.677 [2024-07-14 07:39:39.667929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:23.678 [2024-07-14 07:39:39.668095] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
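nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and polling its JSON-RPC socket. A condensed sketch from the spdk repo root; waitforlisten also watches the pid, so this is the gist rather than the full helper:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the default RPC socket until the app answers.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done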
00:22:23.678 [2024-07-14 07:39:39.668124] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.678 [2024-07-14 07:39:39.668147] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.678 [2024-07-14 07:39:39.668252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.678 [2024-07-14 07:39:39.668346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.678 [2024-07-14 07:39:39.668364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.242 07:39:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:24.242 07:39:40 -- common/autotest_common.sh@852 -- # return 0 00:22:24.242 07:39:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:24.242 07:39:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:24.242 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 07:39:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.500 07:39:40 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 [2024-07-14 07:39:40.432577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.500 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.500 07:39:40 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 Malloc0 00:22:24.500 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.500 07:39:40 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.500 07:39:40 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.500 07:39:40 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 [2024-07-14 07:39:40.496667] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.500 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.500 07:39:40 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 [2024-07-14 07:39:40.504585] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:24.500 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
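The rpc_cmd calls above, collected into one target-side provisioning block: create the TCP transport, back cnode1 with a 64 MiB malloc bdev, and listen on both ports so two controllers can later be attached to the same subsystem (cnode2/Malloc1 below repeat the pattern):
  rpc=./scripts/rpc.py   # rpc_cmd in the trace corresponds to this, on the default /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421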
00:22:24.500 07:39:40 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 Malloc1 00:22:24.500 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.500 07:39:40 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.500 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.500 07:39:40 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:24.500 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.500 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.501 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.501 07:39:40 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:24.501 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.501 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.501 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.501 07:39:40 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:24.501 07:39:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.501 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:24.501 07:39:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.501 07:39:40 -- host/multicontroller.sh@44 -- # bdevperf_pid=4160450 00:22:24.501 07:39:40 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:24.501 07:39:40 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.501 07:39:40 -- host/multicontroller.sh@47 -- # waitforlisten 4160450 /var/tmp/bdevperf.sock 00:22:24.501 07:39:40 -- common/autotest_common.sh@819 -- # '[' -z 4160450 ']' 00:22:24.501 07:39:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.501 07:39:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:24.501 07:39:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
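The host side mirrors this with a standalone bdevperf, launched below with -z (start with no bdevs and wait for the perform_tests RPC) and -r (a private RPC socket) so controllers can be attached to it afterwards over JSON-RPC; condensed, same flags as the trace:
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  bdevperf_pid=$!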
00:22:24.501 07:39:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.501 07:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:25.432 07:39:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:25.432 07:39:41 -- common/autotest_common.sh@852 -- # return 0 00:22:25.432 07:39:41 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:25.432 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.432 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.432 NVMe0n1 00:22:25.432 07:39:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.432 07:39:41 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:25.432 07:39:41 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:25.432 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.432 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.690 07:39:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.690 1 00:22:25.690 07:39:41 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:25.690 07:39:41 -- common/autotest_common.sh@640 -- # local es=0 00:22:25.690 07:39:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:25.690 07:39:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:25.690 07:39:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.690 07:39:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:25.690 07:39:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.690 07:39:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:25.690 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.690 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.690 request: 00:22:25.690 { 00:22:25.690 "name": "NVMe0", 00:22:25.690 "trtype": "tcp", 00:22:25.690 "traddr": "10.0.0.2", 00:22:25.690 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:25.690 "hostaddr": "10.0.0.2", 00:22:25.690 "hostsvcid": "60000", 00:22:25.690 "adrfam": "ipv4", 00:22:25.690 "trsvcid": "4420", 00:22:25.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.690 "method": "bdev_nvme_attach_controller", 00:22:25.690 "req_id": 1 00:22:25.690 } 00:22:25.690 Got JSON-RPC error response 00:22:25.690 response: 00:22:25.690 { 00:22:25.690 "code": -114, 00:22:25.690 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:25.690 } 00:22:25.690 07:39:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:25.690 07:39:41 -- common/autotest_common.sh@643 -- # es=1 00:22:25.690 07:39:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:25.690 07:39:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:25.690 07:39:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:25.690 07:39:41 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:25.690 07:39:41 -- common/autotest_common.sh@640 -- # local es=0 00:22:25.690 07:39:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:25.690 07:39:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.691 07:39:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:25.691 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.691 request: 00:22:25.691 { 00:22:25.691 "name": "NVMe0", 00:22:25.691 "trtype": "tcp", 00:22:25.691 "traddr": "10.0.0.2", 00:22:25.691 "hostaddr": "10.0.0.2", 00:22:25.691 "hostsvcid": "60000", 00:22:25.691 "adrfam": "ipv4", 00:22:25.691 "trsvcid": "4420", 00:22:25.691 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.691 "method": "bdev_nvme_attach_controller", 00:22:25.691 "req_id": 1 00:22:25.691 } 00:22:25.691 Got JSON-RPC error response 00:22:25.691 response: 00:22:25.691 { 00:22:25.691 "code": -114, 00:22:25.691 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:25.691 } 00:22:25.691 07:39:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:25.691 07:39:41 -- common/autotest_common.sh@643 -- # es=1 00:22:25.691 07:39:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:25.691 07:39:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:25.691 07:39:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:25.691 07:39:41 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@640 -- # local es=0 00:22:25.691 07:39:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.691 07:39:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.691 request: 00:22:25.691 { 00:22:25.691 "name": "NVMe0", 00:22:25.691 "trtype": "tcp", 00:22:25.691 "traddr": "10.0.0.2", 00:22:25.691 "hostaddr": 
"10.0.0.2", 00:22:25.691 "hostsvcid": "60000", 00:22:25.691 "adrfam": "ipv4", 00:22:25.691 "trsvcid": "4420", 00:22:25.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.691 "multipath": "disable", 00:22:25.691 "method": "bdev_nvme_attach_controller", 00:22:25.691 "req_id": 1 00:22:25.691 } 00:22:25.691 Got JSON-RPC error response 00:22:25.691 response: 00:22:25.691 { 00:22:25.691 "code": -114, 00:22:25.691 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:25.691 } 00:22:25.691 07:39:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:25.691 07:39:41 -- common/autotest_common.sh@643 -- # es=1 00:22:25.691 07:39:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:25.691 07:39:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:25.691 07:39:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:25.691 07:39:41 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:25.691 07:39:41 -- common/autotest_common.sh@640 -- # local es=0 00:22:25.691 07:39:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:25.691 07:39:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:25.691 07:39:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:25.691 07:39:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:25.691 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.691 request: 00:22:25.691 { 00:22:25.691 "name": "NVMe0", 00:22:25.691 "trtype": "tcp", 00:22:25.691 "traddr": "10.0.0.2", 00:22:25.691 "hostaddr": "10.0.0.2", 00:22:25.691 "hostsvcid": "60000", 00:22:25.691 "adrfam": "ipv4", 00:22:25.691 "trsvcid": "4420", 00:22:25.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.691 "multipath": "failover", 00:22:25.691 "method": "bdev_nvme_attach_controller", 00:22:25.691 "req_id": 1 00:22:25.691 } 00:22:25.691 Got JSON-RPC error response 00:22:25.691 response: 00:22:25.691 { 00:22:25.691 "code": -114, 00:22:25.691 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:25.691 } 00:22:25.691 07:39:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:25.691 07:39:41 -- common/autotest_common.sh@643 -- # es=1 00:22:25.691 07:39:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:25.691 07:39:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:25.691 07:39:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:25.691 07:39:41 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.691 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.691 00:22:25.691 07:39:41 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:22:25.691 07:39:41 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.691 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.691 07:39:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.691 07:39:41 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:25.691 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.691 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.992 00:22:25.992 07:39:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.993 07:39:41 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:25.993 07:39:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.993 07:39:41 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:25.993 07:39:41 -- common/autotest_common.sh@10 -- # set +x 00:22:25.993 07:39:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.993 07:39:41 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:25.993 07:39:41 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.925 0 00:22:26.925 07:39:43 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:26.925 07:39:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.925 07:39:43 -- common/autotest_common.sh@10 -- # set +x 00:22:26.925 07:39:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.925 07:39:43 -- host/multicontroller.sh@100 -- # killprocess 4160450 00:22:26.925 07:39:43 -- common/autotest_common.sh@926 -- # '[' -z 4160450 ']' 00:22:26.925 07:39:43 -- common/autotest_common.sh@930 -- # kill -0 4160450 00:22:26.925 07:39:43 -- common/autotest_common.sh@931 -- # uname 00:22:26.925 07:39:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:26.925 07:39:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4160450 00:22:27.185 07:39:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:27.185 07:39:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:27.185 07:39:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4160450' 00:22:27.185 killing process with pid 4160450 00:22:27.185 07:39:43 -- common/autotest_common.sh@945 -- # kill 4160450 00:22:27.185 07:39:43 -- common/autotest_common.sh@950 -- # wait 4160450 00:22:27.443 07:39:43 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.443 07:39:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:27.443 07:39:43 -- common/autotest_common.sh@10 -- # set +x 00:22:27.443 07:39:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:27.443 07:39:43 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:27.443 07:39:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:27.443 07:39:43 -- common/autotest_common.sh@10 -- # set +x 00:22:27.443 07:39:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:27.443 07:39:43 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
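The -114 responses above are the negative cases: once a controller named NVMe0 exists, re-attaching that name with a different hostnqn, a different subnqn, or -x disable/failover is rejected. For reference, the successful sequence exercised above, condensed into standalone commands against bdevperf's socket:
  rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  $rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  $rpc bdev_nvme_get_controllers | grep -c NVMe      # expect 2
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  $rpc bdev_nvme_detach_controller NVMe1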
00:22:27.443 07:39:43 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:27.443 07:39:43 -- common/autotest_common.sh@1597 -- # read -r file
00:22:27.443 07:39:43 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:22:27.443 07:39:43 -- common/autotest_common.sh@1596 -- # sort -u
00:22:27.443 07:39:43 -- common/autotest_common.sh@1598 -- # cat
00:22:27.443 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:27.443 [2024-07-14 07:39:40.597156] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:22:27.443 [2024-07-14 07:39:40.597242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160450 ]
00:22:27.443 EAL: No free 2048 kB hugepages reported on node 1
00:22:27.443 [2024-07-14 07:39:40.656808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:27.443 [2024-07-14 07:39:40.764693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:27.443 [2024-07-14 07:39:41.895410] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 81dc805c-7849-45a9-80ca-0d01aa5ee74e already exists
00:22:27.443 [2024-07-14 07:39:41.895453] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:81dc805c-7849-45a9-80ca-0d01aa5ee74e alias for bdev NVMe1n1
00:22:27.443 [2024-07-14 07:39:41.895471] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:22:27.443 Running I/O for 1 seconds...
00:22:27.443
00:22:27.443 Latency(us)
00:22:27.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:27.443 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:27.443 NVMe0n1 : 1.01 19595.37 76.54 0.00 0.00 6516.39 1990.35 9903.22
00:22:27.443 ===================================================================================================================
00:22:27.443 Total : 19595.37 76.54 0.00 0.00 6516.39 1990.35 9903.22
00:22:27.443 Received shutdown signal, test time was about 1.000000 seconds
00:22:27.443
00:22:27.443 Latency(us)
00:22:27.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:27.443 ===================================================================================================================
00:22:27.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:27.443 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:27.443 07:39:43 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:27.443 07:39:43 -- common/autotest_common.sh@1597 -- # read -r file
00:22:27.443 07:39:43 -- host/multicontroller.sh@108 -- # nvmftestfini
00:22:27.443 07:39:43 -- nvmf/common.sh@476 -- # nvmfcleanup
00:22:27.443 07:39:43 -- nvmf/common.sh@116 -- # sync
00:22:27.443 07:39:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:22:27.443 07:39:43 -- nvmf/common.sh@119 -- # set +e
00:22:27.443 07:39:43 -- nvmf/common.sh@120 -- # for i in {1..20}
00:22:27.443 07:39:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:22:27.443 rmmod nvme_tcp
00:22:27.443 rmmod nvme_fabrics
00:22:27.443 rmmod nvme_keyring
00:22:27.443 07:39:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:22:27.443 07:39:43 -- nvmf/common.sh@123 -- # set -e
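The set +e / retry-loop / set -e bracket in nvmfcleanup above exists because modprobe -r can fail transiently while just-closed NVMe-oF connections are still dropping references on nvme-tcp; once it succeeds, the single modprobe -v -r nvme-tcp also removes the now-unused nvme_fabrics and nvme_keyring (the three rmmod lines printed), and the follow-up modprobe -v -r nvme-fabrics is a safety net. A bare-bones sketch of the idiom, assuming root; the break/sleep policy here is illustrative, not the exact loop in common.sh:

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # refused while the module still has users
      sleep 1
  done
  modprobe -v -r nvme-fabrics            # no-op if the first removal already took it down
  set -e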
00:22:27.443 07:39:43 -- nvmf/common.sh@124 -- # return 0 00:22:27.443 07:39:43 -- nvmf/common.sh@477 -- # '[' -n 4160286 ']' 00:22:27.443 07:39:43 -- nvmf/common.sh@478 -- # killprocess 4160286 00:22:27.443 07:39:43 -- common/autotest_common.sh@926 -- # '[' -z 4160286 ']' 00:22:27.443 07:39:43 -- common/autotest_common.sh@930 -- # kill -0 4160286 00:22:27.443 07:39:43 -- common/autotest_common.sh@931 -- # uname 00:22:27.443 07:39:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.443 07:39:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4160286 00:22:27.443 07:39:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:27.443 07:39:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:27.443 07:39:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4160286' 00:22:27.443 killing process with pid 4160286 00:22:27.443 07:39:43 -- common/autotest_common.sh@945 -- # kill 4160286 00:22:27.443 07:39:43 -- common/autotest_common.sh@950 -- # wait 4160286 00:22:27.701 07:39:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:27.701 07:39:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:27.701 07:39:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:27.701 07:39:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.701 07:39:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:27.701 07:39:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.701 07:39:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.701 07:39:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.234 07:39:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:30.234 00:22:30.234 real 0m8.400s 00:22:30.234 user 0m16.019s 00:22:30.234 sys 0m2.181s 00:22:30.234 07:39:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.234 07:39:45 -- common/autotest_common.sh@10 -- # set +x 00:22:30.234 ************************************ 00:22:30.234 END TEST nvmf_multicontroller 00:22:30.234 ************************************ 00:22:30.235 07:39:45 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:30.235 07:39:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:30.235 07:39:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.235 07:39:45 -- common/autotest_common.sh@10 -- # set +x 00:22:30.235 ************************************ 00:22:30.235 START TEST nvmf_aer 00:22:30.235 ************************************ 00:22:30.235 07:39:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:30.235 * Looking for test storage... 
00:22:30.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:30.235 07:39:45 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.235 07:39:45 -- nvmf/common.sh@7 -- # uname -s 00:22:30.235 07:39:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.235 07:39:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.235 07:39:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.235 07:39:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.235 07:39:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.235 07:39:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.235 07:39:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.235 07:39:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.235 07:39:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.235 07:39:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.235 07:39:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.235 07:39:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.235 07:39:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.235 07:39:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.235 07:39:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.235 07:39:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.235 07:39:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.235 07:39:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.235 07:39:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.235 07:39:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.235 07:39:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.235 07:39:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.235 07:39:45 -- paths/export.sh@5 -- # export PATH 00:22:30.235 07:39:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.235 07:39:45 -- nvmf/common.sh@46 -- # : 0 00:22:30.235 07:39:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:30.235 07:39:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:30.235 07:39:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:30.235 07:39:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.235 07:39:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.235 07:39:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:30.235 07:39:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:30.235 07:39:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:30.235 07:39:45 -- host/aer.sh@11 -- # nvmftestinit 00:22:30.235 07:39:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:30.235 07:39:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.235 07:39:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:30.235 07:39:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:30.235 07:39:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:30.235 07:39:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.235 07:39:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.235 07:39:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.235 07:39:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:30.235 07:39:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:30.235 07:39:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:30.235 07:39:45 -- common/autotest_common.sh@10 -- # set +x 00:22:32.138 07:39:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:32.138 07:39:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:32.138 07:39:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:32.138 07:39:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:32.138 07:39:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:32.138 07:39:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:32.138 07:39:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:32.138 07:39:47 -- nvmf/common.sh@294 -- # net_devs=() 00:22:32.138 07:39:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:32.138 07:39:47 -- nvmf/common.sh@295 -- # e810=() 00:22:32.138 07:39:47 -- nvmf/common.sh@295 -- # local -ga e810 00:22:32.138 07:39:47 -- nvmf/common.sh@296 -- # x722=() 00:22:32.138 
07:39:47 -- nvmf/common.sh@296 -- # local -ga x722 00:22:32.138 07:39:47 -- nvmf/common.sh@297 -- # mlx=() 00:22:32.138 07:39:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:32.138 07:39:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.139 07:39:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:32.139 07:39:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:32.139 07:39:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:32.139 07:39:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:32.139 07:39:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:32.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:32.139 07:39:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:32.139 07:39:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:32.139 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:32.139 07:39:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:32.139 07:39:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:32.139 07:39:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.139 07:39:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:32.139 07:39:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.139 07:39:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:32.139 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:32.139 07:39:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.139 07:39:47 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:32.139 07:39:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.139 07:39:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:32.139 07:39:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.139 07:39:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:32.139 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:32.139 07:39:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.139 07:39:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:32.139 07:39:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:32.139 07:39:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:32.139 07:39:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:32.139 07:39:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.139 07:39:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.139 07:39:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.139 07:39:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:32.139 07:39:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.139 07:39:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.139 07:39:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:32.139 07:39:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.139 07:39:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.139 07:39:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:32.139 07:39:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:32.139 07:39:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.139 07:39:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.139 07:39:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.139 07:39:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.139 07:39:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:32.139 07:39:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.139 07:39:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.139 07:39:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.139 07:39:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:32.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:22:32.139 00:22:32.139 --- 10.0.0.2 ping statistics --- 00:22:32.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.139 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:32.139 07:39:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:22:32.139 00:22:32.139 --- 10.0.0.1 ping statistics --- 00:22:32.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.139 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:32.139 07:39:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.139 07:39:48 -- nvmf/common.sh@410 -- # return 0 00:22:32.139 07:39:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:32.139 07:39:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.139 07:39:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:32.139 07:39:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:32.139 07:39:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.139 07:39:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:32.139 07:39:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:32.139 07:39:48 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:32.139 07:39:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:32.139 07:39:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:32.139 07:39:48 -- common/autotest_common.sh@10 -- # set +x 00:22:32.139 07:39:48 -- nvmf/common.sh@469 -- # nvmfpid=4162804 00:22:32.139 07:39:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:32.139 07:39:48 -- nvmf/common.sh@470 -- # waitforlisten 4162804 00:22:32.139 07:39:48 -- common/autotest_common.sh@819 -- # '[' -z 4162804 ']' 00:22:32.139 07:39:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.139 07:39:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:32.139 07:39:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.139 07:39:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.139 07:39:48 -- common/autotest_common.sh@10 -- # set +x 00:22:32.139 [2024-07-14 07:39:48.134199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:32.139 [2024-07-14 07:39:48.134284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.139 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.139 [2024-07-14 07:39:48.198497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.139 [2024-07-14 07:39:48.306701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:32.139 [2024-07-14 07:39:48.306852] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.139 [2024-07-14 07:39:48.306879] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.139 [2024-07-14 07:39:48.306908] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
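Behind the ping exchange above, nvmftestinit split the two ice ports across network namespaces so test traffic crosses the physical link: cvl_0_0 (10.0.0.2) was moved into cvl_0_0_ns_spdk for the target, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and nvmf_tgt is then launched under ip netns exec. A condensed replay of that plumbing, using exactly the interface names and addresses from the log (root required):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target over the wire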
00:22:32.139 [2024-07-14 07:39:48.306989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.139 [2024-07-14 07:39:48.307043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.139 [2024-07-14 07:39:48.307046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.139 [2024-07-14 07:39:48.307013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.075 07:39:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:33.075 07:39:49 -- common/autotest_common.sh@852 -- # return 0 00:22:33.075 07:39:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:33.075 07:39:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:33.075 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.075 07:39:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.075 07:39:49 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.075 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.075 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.075 [2024-07-14 07:39:49.122440] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.075 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.075 07:39:49 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:33.075 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.075 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.075 Malloc0 00:22:33.075 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.075 07:39:49 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:33.075 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.075 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.075 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.075 07:39:49 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.075 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.075 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.075 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.075 07:39:49 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.075 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.075 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.075 [2024-07-14 07:39:49.173459] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.075 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.075 07:39:49 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:33.075 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.075 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.075 [2024-07-14 07:39:49.181212] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:33.075 [ 00:22:33.075 { 00:22:33.075 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:33.075 "subtype": "Discovery", 00:22:33.075 "listen_addresses": [], 00:22:33.075 "allow_any_host": true, 00:22:33.075 "hosts": [] 00:22:33.075 }, 00:22:33.075 { 00:22:33.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:33.075 "subtype": "NVMe", 00:22:33.075 "listen_addresses": [ 00:22:33.075 { 00:22:33.075 "transport": "TCP", 00:22:33.075 "trtype": "TCP", 00:22:33.075 "adrfam": "IPv4", 00:22:33.075 "traddr": "10.0.0.2", 00:22:33.075 "trsvcid": "4420" 00:22:33.075 } 00:22:33.075 ], 00:22:33.075 "allow_any_host": true, 00:22:33.075 "hosts": [], 00:22:33.075 "serial_number": "SPDK00000000000001", 00:22:33.075 "model_number": "SPDK bdev Controller", 00:22:33.075 "max_namespaces": 2, 00:22:33.075 "min_cntlid": 1, 00:22:33.075 "max_cntlid": 65519, 00:22:33.075 "namespaces": [ 00:22:33.075 { 00:22:33.075 "nsid": 1, 00:22:33.075 "bdev_name": "Malloc0", 00:22:33.075 "name": "Malloc0", 00:22:33.075 "nguid": "CE8DE89C38A046A59709C06638E9FDF0", 00:22:33.075 "uuid": "ce8de89c-38a0-46a5-9709-c06638e9fdf0" 00:22:33.075 } 00:22:33.075 ] 00:22:33.075 } 00:22:33.075 ] 00:22:33.075 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.075 07:39:49 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:33.075 07:39:49 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:33.075 07:39:49 -- host/aer.sh@33 -- # aerpid=4162965 00:22:33.075 07:39:49 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:33.075 07:39:49 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:33.075 07:39:49 -- common/autotest_common.sh@1244 -- # local i=0 00:22:33.075 07:39:49 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:33.075 07:39:49 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:22:33.075 07:39:49 -- common/autotest_common.sh@1247 -- # i=1 00:22:33.075 07:39:49 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:33.075 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.334 07:39:49 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:33.334 07:39:49 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:22:33.334 07:39:49 -- common/autotest_common.sh@1247 -- # i=2 00:22:33.334 07:39:49 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:33.334 07:39:49 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:33.334 07:39:49 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:33.334 07:39:49 -- common/autotest_common.sh@1255 -- # return 0 00:22:33.334 07:39:49 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:33.334 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.334 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.334 Malloc1 00:22:33.334 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.334 07:39:49 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:33.334 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.334 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.334 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.334 07:39:49 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:33.334 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.334 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.334 [ 00:22:33.334 { 00:22:33.334 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:33.334 "subtype": "Discovery", 00:22:33.334 "listen_addresses": [], 00:22:33.334 "allow_any_host": true, 00:22:33.334 "hosts": [] 00:22:33.334 }, 00:22:33.334 { 00:22:33.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.334 "subtype": "NVMe", 00:22:33.334 "listen_addresses": [ 00:22:33.334 { 00:22:33.334 "transport": "TCP", 00:22:33.334 "trtype": "TCP", 00:22:33.334 "adrfam": "IPv4", 00:22:33.334 "traddr": "10.0.0.2", 00:22:33.334 "trsvcid": "4420" 00:22:33.334 } 00:22:33.334 ], 00:22:33.334 "allow_any_host": true, 00:22:33.334 "hosts": [], 00:22:33.334 "serial_number": "SPDK00000000000001", 00:22:33.334 "model_number": "SPDK bdev Controller", 00:22:33.334 "max_namespaces": 2, 00:22:33.334 "min_cntlid": 1, 00:22:33.334 "max_cntlid": 65519, 00:22:33.334 "namespaces": [ 00:22:33.334 { 00:22:33.334 "nsid": 1, 00:22:33.334 "bdev_name": "Malloc0", 00:22:33.334 "name": "Malloc0", 00:22:33.334 "nguid": "CE8DE89C38A046A59709C06638E9FDF0", 00:22:33.334 "uuid": "ce8de89c-38a0-46a5-9709-c06638e9fdf0" 00:22:33.334 }, 00:22:33.334 { 00:22:33.334 "nsid": 2, 00:22:33.334 "bdev_name": "Malloc1", 00:22:33.334 "name": "Malloc1", 00:22:33.334 "nguid": "AED807D0750F4174B652DCA79E789FE3", 00:22:33.334 "uuid": "aed807d0-750f-4174-b652-dca79e789fe3" 00:22:33.334 } 00:22:33.334 ] 00:22:33.334 } 00:22:33.334 ] 00:22:33.334 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.334 07:39:49 -- host/aer.sh@43 -- # wait 4162965 00:22:33.334 Asynchronous Event Request test 00:22:33.334 Attaching to 10.0.0.2 00:22:33.334 Attached to 10.0.0.2 00:22:33.334 Registering asynchronous event callbacks... 00:22:33.334 Starting namespace attribute notice tests for all controllers... 00:22:33.334 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:33.334 aer_cb - Changed Namespace 00:22:33.334 Cleaning up... 
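That burst of output is the event actually under test: the aer example tool attached to cnode1 expecting up to two namespaces (-n 2) and touched /tmp/aer_touch_file once its callbacks were registered (the file the waitforfile loop above was polling), and hot-adding Malloc1 as nsid 2 then made the controller raise a Namespace Attribute Changed AEN, logged as "aer_cb - Changed Namespace" before the tool cleans up. A sketch of the target-side trigger, assuming rpc.py pointed at the same /var/tmp/spdk.sock the rpc_cmd wrapper uses here:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 4096 --name Malloc1                      # 64 MiB bdev, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2  # hot-add nsid 2 -> AEN to attached hosts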
00:22:33.334 07:39:49 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:33.334 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.334 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.334 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.334 07:39:49 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:33.334 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.334 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.592 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.592 07:39:49 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.592 07:39:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.592 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:22:33.592 07:39:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.592 07:39:49 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:33.592 07:39:49 -- host/aer.sh@51 -- # nvmftestfini 00:22:33.592 07:39:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:33.592 07:39:49 -- nvmf/common.sh@116 -- # sync 00:22:33.592 07:39:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:33.592 07:39:49 -- nvmf/common.sh@119 -- # set +e 00:22:33.592 07:39:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:33.592 07:39:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:33.592 rmmod nvme_tcp 00:22:33.592 rmmod nvme_fabrics 00:22:33.592 rmmod nvme_keyring 00:22:33.592 07:39:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:33.592 07:39:49 -- nvmf/common.sh@123 -- # set -e 00:22:33.592 07:39:49 -- nvmf/common.sh@124 -- # return 0 00:22:33.592 07:39:49 -- nvmf/common.sh@477 -- # '[' -n 4162804 ']' 00:22:33.592 07:39:49 -- nvmf/common.sh@478 -- # killprocess 4162804 00:22:33.592 07:39:49 -- common/autotest_common.sh@926 -- # '[' -z 4162804 ']' 00:22:33.592 07:39:49 -- common/autotest_common.sh@930 -- # kill -0 4162804 00:22:33.592 07:39:49 -- common/autotest_common.sh@931 -- # uname 00:22:33.592 07:39:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:33.592 07:39:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4162804 00:22:33.593 07:39:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:33.593 07:39:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:33.593 07:39:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4162804' 00:22:33.593 killing process with pid 4162804 00:22:33.593 07:39:49 -- common/autotest_common.sh@945 -- # kill 4162804 00:22:33.593 [2024-07-14 07:39:49.614963] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:33.593 07:39:49 -- common/autotest_common.sh@950 -- # wait 4162804 00:22:33.851 07:39:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:33.851 07:39:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:33.851 07:39:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:33.851 07:39:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.851 07:39:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:33.851 07:39:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.851 07:39:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.851 07:39:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.382 07:39:51 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:36.382 00:22:36.382 real 0m6.082s 00:22:36.382 user 0m7.064s 00:22:36.382 sys 0m1.911s 00:22:36.382 07:39:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.382 07:39:51 -- common/autotest_common.sh@10 -- # set +x 00:22:36.383 ************************************ 00:22:36.383 END TEST nvmf_aer 00:22:36.383 ************************************ 00:22:36.383 07:39:51 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:36.383 07:39:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:36.383 07:39:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:36.383 07:39:51 -- common/autotest_common.sh@10 -- # set +x 00:22:36.383 ************************************ 00:22:36.383 START TEST nvmf_async_init 00:22:36.383 ************************************ 00:22:36.383 07:39:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:36.383 * Looking for test storage... 00:22:36.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.383 07:39:52 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.383 07:39:52 -- nvmf/common.sh@7 -- # uname -s 00:22:36.383 07:39:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.383 07:39:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.383 07:39:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.383 07:39:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.383 07:39:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.383 07:39:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.383 07:39:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.383 07:39:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.383 07:39:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.383 07:39:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.383 07:39:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.383 07:39:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.383 07:39:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.383 07:39:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.383 07:39:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.383 07:39:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.383 07:39:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.383 07:39:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.383 07:39:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.383 07:39:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.383 07:39:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.383 07:39:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.383 07:39:52 -- paths/export.sh@5 -- # export PATH 00:22:36.383 07:39:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.383 07:39:52 -- nvmf/common.sh@46 -- # : 0 00:22:36.383 07:39:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:36.383 07:39:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:36.383 07:39:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:36.383 07:39:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.383 07:39:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.383 07:39:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:36.383 07:39:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:36.383 07:39:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:36.383 07:39:52 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:36.383 07:39:52 -- host/async_init.sh@14 -- # null_block_size=512 00:22:36.383 07:39:52 -- host/async_init.sh@15 -- # null_bdev=null0 00:22:36.383 07:39:52 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:36.383 07:39:52 -- host/async_init.sh@20 -- # uuidgen 00:22:36.383 07:39:52 -- host/async_init.sh@20 -- # tr -d - 00:22:36.383 07:39:52 -- host/async_init.sh@20 -- # nguid=f3d78bf8983a487e9ded12c7b6dcc37f 00:22:36.383 07:39:52 -- host/async_init.sh@22 -- # nvmftestinit 00:22:36.383 07:39:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
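async_init keys everything off one generated identifier: the uuidgen output with its dashes stripped becomes the namespace NGUID (f3d78bf8983a487e9ded12c7b6dcc37f above), and the test later checks that bdev_get_bdevs reports the same value re-dashed as the uuid of nvme0n1. A compressed sketch of the setup the script performs below, with rpc.py standing in for the rpc_cmd wrapper and sizes taken from the $null_bdev_size/$null_block_size assignments just made:

  nguid=$(uuidgen | tr -d -)                      # f3d78bf8-983a-... -> f3d78bf8983a...
  rpc.py bdev_null_create null0 1024 512          # 1024 MiB null bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g $nguid
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420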
00:22:36.383 07:39:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.383 07:39:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:36.383 07:39:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:36.383 07:39:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:36.383 07:39:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.383 07:39:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.383 07:39:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.383 07:39:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:36.383 07:39:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:36.383 07:39:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:36.383 07:39:52 -- common/autotest_common.sh@10 -- # set +x 00:22:38.287 07:39:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:38.287 07:39:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:38.287 07:39:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:38.287 07:39:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:38.287 07:39:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:38.287 07:39:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:38.287 07:39:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:38.287 07:39:54 -- nvmf/common.sh@294 -- # net_devs=() 00:22:38.287 07:39:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:38.287 07:39:54 -- nvmf/common.sh@295 -- # e810=() 00:22:38.287 07:39:54 -- nvmf/common.sh@295 -- # local -ga e810 00:22:38.287 07:39:54 -- nvmf/common.sh@296 -- # x722=() 00:22:38.287 07:39:54 -- nvmf/common.sh@296 -- # local -ga x722 00:22:38.287 07:39:54 -- nvmf/common.sh@297 -- # mlx=() 00:22:38.287 07:39:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:38.287 07:39:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.287 07:39:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:38.287 07:39:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:38.287 07:39:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:38.287 07:39:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:38.287 07:39:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:38.287 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:38.287 07:39:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:38.287 07:39:54 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:38.287 07:39:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:38.287 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:38.287 07:39:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:38.287 07:39:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:38.287 07:39:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.287 07:39:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:38.287 07:39:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.287 07:39:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:38.287 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:38.287 07:39:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.287 07:39:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:38.287 07:39:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.287 07:39:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:38.287 07:39:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.287 07:39:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:38.287 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:38.287 07:39:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.287 07:39:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:38.287 07:39:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:38.287 07:39:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:38.287 07:39:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.287 07:39:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.287 07:39:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.287 07:39:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:38.287 07:39:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.287 07:39:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.287 07:39:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:38.287 07:39:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.287 07:39:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.287 07:39:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:38.287 07:39:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:38.287 07:39:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.287 07:39:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:22:38.287 07:39:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.287 07:39:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.287 07:39:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:38.287 07:39:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.287 07:39:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.287 07:39:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.287 07:39:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:38.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:22:38.287 00:22:38.287 --- 10.0.0.2 ping statistics --- 00:22:38.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.287 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:22:38.287 07:39:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:22:38.287 00:22:38.287 --- 10.0.0.1 ping statistics --- 00:22:38.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.287 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:38.287 07:39:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.287 07:39:54 -- nvmf/common.sh@410 -- # return 0 00:22:38.287 07:39:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:38.287 07:39:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.287 07:39:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:38.287 07:39:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.287 07:39:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:38.287 07:39:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:38.287 07:39:54 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:38.287 07:39:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:38.287 07:39:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:38.287 07:39:54 -- common/autotest_common.sh@10 -- # set +x 00:22:38.287 07:39:54 -- nvmf/common.sh@469 -- # nvmfpid=4164921 00:22:38.287 07:39:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:38.287 07:39:54 -- nvmf/common.sh@470 -- # waitforlisten 4164921 00:22:38.287 07:39:54 -- common/autotest_common.sh@819 -- # '[' -z 4164921 ']' 00:22:38.287 07:39:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.287 07:39:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:38.287 07:39:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.287 07:39:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:38.287 07:39:54 -- common/autotest_common.sh@10 -- # set +x 00:22:38.287 [2024-07-14 07:39:54.214880] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:38.287 [2024-07-14 07:39:54.214981] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.287 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.287 [2024-07-14 07:39:54.287005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.287 [2024-07-14 07:39:54.400209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:38.287 [2024-07-14 07:39:54.400373] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.287 [2024-07-14 07:39:54.400392] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.287 [2024-07-14 07:39:54.400406] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.287 [2024-07-14 07:39:54.400436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.222 07:39:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:39.222 07:39:55 -- common/autotest_common.sh@852 -- # return 0 00:22:39.222 07:39:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:39.222 07:39:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:39.222 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.222 07:39:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.222 07:39:55 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:39.222 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.222 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.222 [2024-07-14 07:39:55.168356] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.222 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.222 07:39:55 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:39.222 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.222 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.222 null0 00:22:39.222 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.222 07:39:55 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:39.222 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.222 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.222 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.222 07:39:55 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:39.222 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.222 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.222 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.222 07:39:55 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f3d78bf8983a487e9ded12c7b6dcc37f 00:22:39.222 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.222 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.222 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.222 07:39:55 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:39.222 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.222 07:39:55 -- 
common/autotest_common.sh@10 -- # set +x 00:22:39.222 [2024-07-14 07:39:55.208602] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.222 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.222 07:39:55 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:39.222 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.222 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.480 nvme0n1 00:22:39.480 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.480 07:39:55 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:39.480 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.480 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.480 [ 00:22:39.480 { 00:22:39.480 "name": "nvme0n1", 00:22:39.480 "aliases": [ 00:22:39.480 "f3d78bf8-983a-487e-9ded-12c7b6dcc37f" 00:22:39.480 ], 00:22:39.480 "product_name": "NVMe disk", 00:22:39.480 "block_size": 512, 00:22:39.480 "num_blocks": 2097152, 00:22:39.480 "uuid": "f3d78bf8-983a-487e-9ded-12c7b6dcc37f", 00:22:39.480 "assigned_rate_limits": { 00:22:39.480 "rw_ios_per_sec": 0, 00:22:39.480 "rw_mbytes_per_sec": 0, 00:22:39.480 "r_mbytes_per_sec": 0, 00:22:39.480 "w_mbytes_per_sec": 0 00:22:39.480 }, 00:22:39.480 "claimed": false, 00:22:39.480 "zoned": false, 00:22:39.480 "supported_io_types": { 00:22:39.480 "read": true, 00:22:39.480 "write": true, 00:22:39.480 "unmap": false, 00:22:39.480 "write_zeroes": true, 00:22:39.480 "flush": true, 00:22:39.480 "reset": true, 00:22:39.480 "compare": true, 00:22:39.480 "compare_and_write": true, 00:22:39.480 "abort": true, 00:22:39.480 "nvme_admin": true, 00:22:39.480 "nvme_io": true 00:22:39.480 }, 00:22:39.480 "driver_specific": { 00:22:39.480 "nvme": [ 00:22:39.480 { 00:22:39.480 "trid": { 00:22:39.480 "trtype": "TCP", 00:22:39.480 "adrfam": "IPv4", 00:22:39.480 "traddr": "10.0.0.2", 00:22:39.480 "trsvcid": "4420", 00:22:39.480 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.480 }, 00:22:39.480 "ctrlr_data": { 00:22:39.480 "cntlid": 1, 00:22:39.480 "vendor_id": "0x8086", 00:22:39.480 "model_number": "SPDK bdev Controller", 00:22:39.480 "serial_number": "00000000000000000000", 00:22:39.480 "firmware_revision": "24.01.1", 00:22:39.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.480 "oacs": { 00:22:39.480 "security": 0, 00:22:39.480 "format": 0, 00:22:39.480 "firmware": 0, 00:22:39.480 "ns_manage": 0 00:22:39.480 }, 00:22:39.480 "multi_ctrlr": true, 00:22:39.480 "ana_reporting": false 00:22:39.480 }, 00:22:39.480 "vs": { 00:22:39.480 "nvme_version": "1.3" 00:22:39.480 }, 00:22:39.480 "ns_data": { 00:22:39.480 "id": 1, 00:22:39.480 "can_share": true 00:22:39.480 } 00:22:39.480 } 00:22:39.480 ], 00:22:39.480 "mp_policy": "active_passive" 00:22:39.480 } 00:22:39.480 } 00:22:39.480 ] 00:22:39.480 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.480 07:39:55 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:39.480 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.480 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.480 [2024-07-14 07:39:55.457328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:39.480 [2024-07-14 07:39:55.457417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd3a80 (9): Bad file 
descriptor 00:22:39.480 [2024-07-14 07:39:55.590017] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:39.480 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.480 07:39:55 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:39.480 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.480 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.480 [ 00:22:39.480 { 00:22:39.480 "name": "nvme0n1", 00:22:39.480 "aliases": [ 00:22:39.480 "f3d78bf8-983a-487e-9ded-12c7b6dcc37f" 00:22:39.480 ], 00:22:39.480 "product_name": "NVMe disk", 00:22:39.480 "block_size": 512, 00:22:39.480 "num_blocks": 2097152, 00:22:39.480 "uuid": "f3d78bf8-983a-487e-9ded-12c7b6dcc37f", 00:22:39.480 "assigned_rate_limits": { 00:22:39.480 "rw_ios_per_sec": 0, 00:22:39.480 "rw_mbytes_per_sec": 0, 00:22:39.480 "r_mbytes_per_sec": 0, 00:22:39.480 "w_mbytes_per_sec": 0 00:22:39.480 }, 00:22:39.480 "claimed": false, 00:22:39.480 "zoned": false, 00:22:39.480 "supported_io_types": { 00:22:39.480 "read": true, 00:22:39.480 "write": true, 00:22:39.480 "unmap": false, 00:22:39.480 "write_zeroes": true, 00:22:39.480 "flush": true, 00:22:39.480 "reset": true, 00:22:39.480 "compare": true, 00:22:39.480 "compare_and_write": true, 00:22:39.480 "abort": true, 00:22:39.480 "nvme_admin": true, 00:22:39.480 "nvme_io": true 00:22:39.480 }, 00:22:39.480 "driver_specific": { 00:22:39.480 "nvme": [ 00:22:39.480 { 00:22:39.480 "trid": { 00:22:39.480 "trtype": "TCP", 00:22:39.480 "adrfam": "IPv4", 00:22:39.480 "traddr": "10.0.0.2", 00:22:39.480 "trsvcid": "4420", 00:22:39.480 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.480 }, 00:22:39.480 "ctrlr_data": { 00:22:39.480 "cntlid": 2, 00:22:39.480 "vendor_id": "0x8086", 00:22:39.480 "model_number": "SPDK bdev Controller", 00:22:39.480 "serial_number": "00000000000000000000", 00:22:39.480 "firmware_revision": "24.01.1", 00:22:39.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.480 "oacs": { 00:22:39.480 "security": 0, 00:22:39.480 "format": 0, 00:22:39.480 "firmware": 0, 00:22:39.480 "ns_manage": 0 00:22:39.480 }, 00:22:39.480 "multi_ctrlr": true, 00:22:39.480 "ana_reporting": false 00:22:39.480 }, 00:22:39.480 "vs": { 00:22:39.480 "nvme_version": "1.3" 00:22:39.480 }, 00:22:39.480 "ns_data": { 00:22:39.480 "id": 1, 00:22:39.480 "can_share": true 00:22:39.480 } 00:22:39.480 } 00:22:39.480 ], 00:22:39.480 "mp_policy": "active_passive" 00:22:39.480 } 00:22:39.480 } 00:22:39.480 ] 00:22:39.480 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.480 07:39:55 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.480 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.480 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.480 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.480 07:39:55 -- host/async_init.sh@53 -- # mktemp 00:22:39.480 07:39:55 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5F40RCKJSG 00:22:39.480 07:39:55 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:39.480 07:39:55 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5F40RCKJSG 00:22:39.480 07:39:55 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:39.480 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.480 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.480 07:39:55 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.480 07:39:55 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:39.480 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.480 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.480 [2024-07-14 07:39:55.641979] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.480 [2024-07-14 07:39:55.642123] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:39.480 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.480 07:39:55 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5F40RCKJSG 00:22:39.480 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.480 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.738 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.738 07:39:55 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5F40RCKJSG 00:22:39.738 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.738 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.738 [2024-07-14 07:39:55.658013] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.738 nvme0n1 00:22:39.738 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.738 07:39:55 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:39.738 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.738 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.738 [ 00:22:39.738 { 00:22:39.738 "name": "nvme0n1", 00:22:39.738 "aliases": [ 00:22:39.738 "f3d78bf8-983a-487e-9ded-12c7b6dcc37f" 00:22:39.738 ], 00:22:39.738 "product_name": "NVMe disk", 00:22:39.738 "block_size": 512, 00:22:39.738 "num_blocks": 2097152, 00:22:39.738 "uuid": "f3d78bf8-983a-487e-9ded-12c7b6dcc37f", 00:22:39.738 "assigned_rate_limits": { 00:22:39.738 "rw_ios_per_sec": 0, 00:22:39.738 "rw_mbytes_per_sec": 0, 00:22:39.738 "r_mbytes_per_sec": 0, 00:22:39.738 "w_mbytes_per_sec": 0 00:22:39.738 }, 00:22:39.738 "claimed": false, 00:22:39.738 "zoned": false, 00:22:39.738 "supported_io_types": { 00:22:39.738 "read": true, 00:22:39.738 "write": true, 00:22:39.738 "unmap": false, 00:22:39.738 "write_zeroes": true, 00:22:39.738 "flush": true, 00:22:39.738 "reset": true, 00:22:39.738 "compare": true, 00:22:39.738 "compare_and_write": true, 00:22:39.738 "abort": true, 00:22:39.738 "nvme_admin": true, 00:22:39.738 "nvme_io": true 00:22:39.739 }, 00:22:39.739 "driver_specific": { 00:22:39.739 "nvme": [ 00:22:39.739 { 00:22:39.739 "trid": { 00:22:39.739 "trtype": "TCP", 00:22:39.739 "adrfam": "IPv4", 00:22:39.739 "traddr": "10.0.0.2", 00:22:39.739 "trsvcid": "4421", 00:22:39.739 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.739 }, 00:22:39.739 "ctrlr_data": { 00:22:39.739 "cntlid": 3, 00:22:39.739 "vendor_id": "0x8086", 00:22:39.739 "model_number": "SPDK bdev Controller", 00:22:39.739 "serial_number": "00000000000000000000", 00:22:39.739 "firmware_revision": "24.01.1", 00:22:39.739 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.739 "oacs": { 00:22:39.739 "security": 0, 00:22:39.739 "format": 0, 00:22:39.739 "firmware": 0, 00:22:39.739 
"ns_manage": 0 00:22:39.739 }, 00:22:39.739 "multi_ctrlr": true, 00:22:39.739 "ana_reporting": false 00:22:39.739 }, 00:22:39.739 "vs": { 00:22:39.739 "nvme_version": "1.3" 00:22:39.739 }, 00:22:39.739 "ns_data": { 00:22:39.739 "id": 1, 00:22:39.739 "can_share": true 00:22:39.739 } 00:22:39.739 } 00:22:39.739 ], 00:22:39.739 "mp_policy": "active_passive" 00:22:39.739 } 00:22:39.739 } 00:22:39.739 ] 00:22:39.739 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.739 07:39:55 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.739 07:39:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.739 07:39:55 -- common/autotest_common.sh@10 -- # set +x 00:22:39.739 07:39:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.739 07:39:55 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5F40RCKJSG 00:22:39.739 07:39:55 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:39.739 07:39:55 -- host/async_init.sh@78 -- # nvmftestfini 00:22:39.739 07:39:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:39.739 07:39:55 -- nvmf/common.sh@116 -- # sync 00:22:39.739 07:39:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:39.739 07:39:55 -- nvmf/common.sh@119 -- # set +e 00:22:39.739 07:39:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:39.739 07:39:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:39.739 rmmod nvme_tcp 00:22:39.739 rmmod nvme_fabrics 00:22:39.739 rmmod nvme_keyring 00:22:39.739 07:39:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:39.739 07:39:55 -- nvmf/common.sh@123 -- # set -e 00:22:39.739 07:39:55 -- nvmf/common.sh@124 -- # return 0 00:22:39.739 07:39:55 -- nvmf/common.sh@477 -- # '[' -n 4164921 ']' 00:22:39.739 07:39:55 -- nvmf/common.sh@478 -- # killprocess 4164921 00:22:39.739 07:39:55 -- common/autotest_common.sh@926 -- # '[' -z 4164921 ']' 00:22:39.739 07:39:55 -- common/autotest_common.sh@930 -- # kill -0 4164921 00:22:39.739 07:39:55 -- common/autotest_common.sh@931 -- # uname 00:22:39.739 07:39:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:39.739 07:39:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4164921 00:22:39.739 07:39:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:39.739 07:39:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:39.739 07:39:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4164921' 00:22:39.739 killing process with pid 4164921 00:22:39.739 07:39:55 -- common/autotest_common.sh@945 -- # kill 4164921 00:22:39.739 07:39:55 -- common/autotest_common.sh@950 -- # wait 4164921 00:22:39.997 07:39:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:39.997 07:39:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:39.997 07:39:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:39.997 07:39:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.997 07:39:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:39.997 07:39:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.997 07:39:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.997 07:39:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.530 07:39:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:42.530 00:22:42.530 real 0m6.148s 00:22:42.530 user 0m2.871s 00:22:42.530 sys 0m1.857s 00:22:42.530 07:39:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.530 07:39:58 -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.530 ************************************ 00:22:42.530 END TEST nvmf_async_init 00:22:42.530 ************************************ 00:22:42.530 07:39:58 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:42.530 07:39:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:42.530 07:39:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:42.530 07:39:58 -- common/autotest_common.sh@10 -- # set +x 00:22:42.530 ************************************ 00:22:42.530 START TEST dma 00:22:42.530 ************************************ 00:22:42.530 07:39:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:42.530 * Looking for test storage... 00:22:42.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.530 07:39:58 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.530 07:39:58 -- nvmf/common.sh@7 -- # uname -s 00:22:42.530 07:39:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.530 07:39:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.530 07:39:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.530 07:39:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.530 07:39:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.530 07:39:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.530 07:39:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.530 07:39:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.530 07:39:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.530 07:39:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.530 07:39:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.530 07:39:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.530 07:39:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.530 07:39:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.530 07:39:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.530 07:39:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.530 07:39:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.530 07:39:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.530 07:39:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.530 07:39:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.531 07:39:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.531 07:39:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.531 07:39:58 -- paths/export.sh@5 -- # export PATH 00:22:42.531 07:39:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.531 07:39:58 -- nvmf/common.sh@46 -- # : 0 00:22:42.531 07:39:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:42.531 07:39:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:42.531 07:39:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:42.531 07:39:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.531 07:39:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.531 07:39:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:42.531 07:39:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:42.531 07:39:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:42.531 07:39:58 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:42.531 07:39:58 -- host/dma.sh@13 -- # exit 0 00:22:42.531 00:22:42.531 real 0m0.067s 00:22:42.531 user 0m0.032s 00:22:42.531 sys 0m0.040s 00:22:42.531 07:39:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.531 07:39:58 -- common/autotest_common.sh@10 -- # set +x 00:22:42.531 ************************************ 00:22:42.531 END TEST dma 00:22:42.531 ************************************ 00:22:42.531 07:39:58 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:42.531 07:39:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:42.531 07:39:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:42.531 07:39:58 -- common/autotest_common.sh@10 -- # set +x 00:22:42.531 ************************************ 00:22:42.531 START TEST nvmf_identify 00:22:42.531 ************************************ 00:22:42.531 07:39:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:42.531 * Looking for 
test storage... 00:22:42.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.531 07:39:58 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.531 07:39:58 -- nvmf/common.sh@7 -- # uname -s 00:22:42.531 07:39:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.531 07:39:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.531 07:39:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.531 07:39:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.531 07:39:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.531 07:39:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.531 07:39:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.531 07:39:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.531 07:39:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.531 07:39:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.531 07:39:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.531 07:39:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.531 07:39:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.531 07:39:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.531 07:39:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.531 07:39:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.531 07:39:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.531 07:39:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.531 07:39:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.531 07:39:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.531 07:39:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.531 07:39:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.531 07:39:58 -- paths/export.sh@5 -- # export PATH 00:22:42.531 07:39:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.531 07:39:58 -- nvmf/common.sh@46 -- # : 0 00:22:42.531 07:39:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:42.531 07:39:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:42.531 07:39:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:42.531 07:39:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.531 07:39:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.531 07:39:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:42.531 07:39:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:42.531 07:39:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:42.531 07:39:58 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.531 07:39:58 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.531 07:39:58 -- host/identify.sh@14 -- # nvmftestinit 00:22:42.531 07:39:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:42.531 07:39:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.531 07:39:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:42.531 07:39:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:42.531 07:39:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:42.531 07:39:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.531 07:39:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.531 07:39:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.531 07:39:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:42.531 07:39:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:42.531 07:39:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:42.531 07:39:58 -- common/autotest_common.sh@10 -- # set +x 00:22:44.432 07:40:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:44.432 07:40:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:44.432 07:40:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:44.432 07:40:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:44.432 07:40:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:44.432 07:40:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:44.432 07:40:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:44.432 07:40:00 -- nvmf/common.sh@294 -- # net_devs=() 00:22:44.432 07:40:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:44.432 07:40:00 -- nvmf/common.sh@295 
-- # e810=() 00:22:44.432 07:40:00 -- nvmf/common.sh@295 -- # local -ga e810 00:22:44.432 07:40:00 -- nvmf/common.sh@296 -- # x722=() 00:22:44.432 07:40:00 -- nvmf/common.sh@296 -- # local -ga x722 00:22:44.432 07:40:00 -- nvmf/common.sh@297 -- # mlx=() 00:22:44.432 07:40:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:44.432 07:40:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.432 07:40:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:44.432 07:40:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:44.432 07:40:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:44.432 07:40:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:44.432 07:40:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:44.432 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:44.432 07:40:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:44.432 07:40:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:44.432 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:44.432 07:40:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:44.432 07:40:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:44.432 07:40:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.432 07:40:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:44.432 07:40:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.432 07:40:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:44.432 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:22:44.432 07:40:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.432 07:40:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:44.432 07:40:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.432 07:40:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:44.432 07:40:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.432 07:40:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:44.432 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:44.432 07:40:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.432 07:40:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:44.432 07:40:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:44.432 07:40:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:44.432 07:40:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.432 07:40:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.432 07:40:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.432 07:40:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:44.432 07:40:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.432 07:40:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.432 07:40:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:44.432 07:40:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.432 07:40:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.432 07:40:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:44.432 07:40:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:44.432 07:40:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.432 07:40:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.432 07:40:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.432 07:40:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.432 07:40:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:44.432 07:40:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.432 07:40:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.432 07:40:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.432 07:40:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:44.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:22:44.432 00:22:44.432 --- 10.0.0.2 ping statistics --- 00:22:44.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.432 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:22:44.432 07:40:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:22:44.432 00:22:44.432 --- 10.0.0.1 ping statistics --- 00:22:44.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.432 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:44.432 07:40:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.432 07:40:00 -- nvmf/common.sh@410 -- # return 0 00:22:44.432 07:40:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:44.432 07:40:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.432 07:40:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:44.432 07:40:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.432 07:40:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:44.432 07:40:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:44.432 07:40:00 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:44.432 07:40:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:44.432 07:40:00 -- common/autotest_common.sh@10 -- # set +x 00:22:44.432 07:40:00 -- host/identify.sh@19 -- # nvmfpid=4167182 00:22:44.433 07:40:00 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:44.433 07:40:00 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.433 07:40:00 -- host/identify.sh@23 -- # waitforlisten 4167182 00:22:44.433 07:40:00 -- common/autotest_common.sh@819 -- # '[' -z 4167182 ']' 00:22:44.433 07:40:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.433 07:40:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:44.433 07:40:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.433 07:40:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:44.433 07:40:00 -- common/autotest_common.sh@10 -- # set +x 00:22:44.433 [2024-07-14 07:40:00.524424] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:44.433 [2024-07-14 07:40:00.524514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.433 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.433 [2024-07-14 07:40:00.589368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.690 [2024-07-14 07:40:00.700369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:44.690 [2024-07-14 07:40:00.700535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.690 [2024-07-14 07:40:00.700563] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.690 [2024-07-14 07:40:00.700583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
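The nvmftestinit trace above amounts to a small, reproducible topology: the first e810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, with an iptables accept rule for the NVMe/TCP port. A distilled sketch of that setup follows; the interface and namespace names are whatever this particular run assigned, so treat them as placeholders on other hosts.

# Sketch distilled from the nvmf_tcp_init trace above (names taken from this run, not fixed).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # initiator -> target reachability check
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

The same namespace prefix then wraps the target itself, which is why the launch above reads ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF.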
00:22:44.690 [2024-07-14 07:40:00.700680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.690 [2024-07-14 07:40:00.700738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.690 [2024-07-14 07:40:00.700767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.690 [2024-07-14 07:40:00.700770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.623 07:40:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:45.623 07:40:01 -- common/autotest_common.sh@852 -- # return 0 00:22:45.623 07:40:01 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.623 07:40:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.623 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.623 [2024-07-14 07:40:01.469301] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.623 07:40:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.623 07:40:01 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:45.623 07:40:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:45.623 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.623 07:40:01 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.623 07:40:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.623 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.623 Malloc0 00:22:45.623 07:40:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.623 07:40:01 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.623 07:40:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.623 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.623 07:40:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.623 07:40:01 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:45.623 07:40:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.623 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.623 07:40:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.623 07:40:01 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.623 07:40:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.623 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.623 [2024-07-14 07:40:01.537469] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.623 07:40:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.623 07:40:01 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:45.623 07:40:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.623 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.623 07:40:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.623 07:40:01 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:45.623 07:40:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.623 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.623 [2024-07-14 07:40:01.553245] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:45.623 [ 
00:22:45.623 { 00:22:45.623 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:45.623 "subtype": "Discovery", 00:22:45.623 "listen_addresses": [ 00:22:45.623 { 00:22:45.623 "transport": "TCP", 00:22:45.623 "trtype": "TCP", 00:22:45.623 "adrfam": "IPv4", 00:22:45.623 "traddr": "10.0.0.2", 00:22:45.623 "trsvcid": "4420" 00:22:45.623 } 00:22:45.623 ], 00:22:45.623 "allow_any_host": true, 00:22:45.623 "hosts": [] 00:22:45.623 }, 00:22:45.623 { 00:22:45.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.623 "subtype": "NVMe", 00:22:45.623 "listen_addresses": [ 00:22:45.623 { 00:22:45.623 "transport": "TCP", 00:22:45.623 "trtype": "TCP", 00:22:45.623 "adrfam": "IPv4", 00:22:45.623 "traddr": "10.0.0.2", 00:22:45.623 "trsvcid": "4420" 00:22:45.624 } 00:22:45.624 ], 00:22:45.624 "allow_any_host": true, 00:22:45.624 "hosts": [], 00:22:45.624 "serial_number": "SPDK00000000000001", 00:22:45.624 "model_number": "SPDK bdev Controller", 00:22:45.624 "max_namespaces": 32, 00:22:45.624 "min_cntlid": 1, 00:22:45.624 "max_cntlid": 65519, 00:22:45.624 "namespaces": [ 00:22:45.624 { 00:22:45.624 "nsid": 1, 00:22:45.624 "bdev_name": "Malloc0", 00:22:45.624 "name": "Malloc0", 00:22:45.624 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:45.624 "eui64": "ABCDEF0123456789", 00:22:45.624 "uuid": "c58df8cb-cafa-493f-bb0c-9657d7f25528" 00:22:45.624 } 00:22:45.624 ] 00:22:45.624 } 00:22:45.624 ] 00:22:45.624 07:40:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.624 07:40:01 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:45.624 [2024-07-14 07:40:01.579700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
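Up to this point the identify test is nothing more than a short RPC sequence against that namespaced target, followed by one initiator-side probe. Condensed below, assuming the usual scripts/rpc.py front end in place of the harness's rpc_cmd wrapper (an equivalence assumed here, with the default RPC socket path):

# Provision the target (run inside cvl_0_0_ns_spdk), mirroring the rpc_cmd calls above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems        # emits the JSON dump shown above

# Initiator side: connect to the discovery subsystem and dump everything it reports.
build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

The -L all flag is what produces the verbose nvme_tcp/nvme_ctrlr DEBUG trace that follows: every PDU and every controller state transition of the fabric connect handshake gets logged.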
00:22:45.624 [2024-07-14 07:40:01.579742] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167341 ] 00:22:45.624 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.624 [2024-07-14 07:40:01.613139] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:45.624 [2024-07-14 07:40:01.613220] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:45.624 [2024-07-14 07:40:01.613231] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:45.624 [2024-07-14 07:40:01.613247] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:45.624 [2024-07-14 07:40:01.613261] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:45.624 [2024-07-14 07:40:01.616923] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:45.624 [2024-07-14 07:40:01.616998] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x89fe10 0 00:22:45.624 [2024-07-14 07:40:01.623891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:45.624 [2024-07-14 07:40:01.623914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:45.624 [2024-07-14 07:40:01.623923] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:45.624 [2024-07-14 07:40:01.623930] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:45.624 [2024-07-14 07:40:01.624032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.624047] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.624056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.624 [2024-07-14 07:40:01.624082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:45.624 [2024-07-14 07:40:01.624111] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.624 [2024-07-14 07:40:01.631892] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.624 [2024-07-14 07:40:01.631910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.624 [2024-07-14 07:40:01.631918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.631926] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.624 [2024-07-14 07:40:01.631948] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:45.624 [2024-07-14 07:40:01.631960] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:45.624 [2024-07-14 07:40:01.631971] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:45.624 [2024-07-14 07:40:01.631991] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632006] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.624 [2024-07-14 07:40:01.632018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.624 [2024-07-14 07:40:01.632041] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.624 [2024-07-14 07:40:01.632214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.624 [2024-07-14 07:40:01.632226] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.624 [2024-07-14 07:40:01.632233] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.624 [2024-07-14 07:40:01.632250] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:45.624 [2024-07-14 07:40:01.632263] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:45.624 [2024-07-14 07:40:01.632275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632289] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.624 [2024-07-14 07:40:01.632299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.624 [2024-07-14 07:40:01.632320] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.624 [2024-07-14 07:40:01.632500] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.624 [2024-07-14 07:40:01.632512] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.624 [2024-07-14 07:40:01.632518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.624 [2024-07-14 07:40:01.632534] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:45.624 [2024-07-14 07:40:01.632549] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:45.624 [2024-07-14 07:40:01.632561] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.624 [2024-07-14 07:40:01.632589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.624 [2024-07-14 07:40:01.632611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.624 [2024-07-14 07:40:01.632773] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.624 [2024-07-14 07:40:01.632788] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.624 [2024-07-14 07:40:01.632795] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632801] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.624 [2024-07-14 07:40:01.632810] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:45.624 [2024-07-14 07:40:01.632827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632836] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.632843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.624 [2024-07-14 07:40:01.632853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.624 [2024-07-14 07:40:01.632881] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.624 [2024-07-14 07:40:01.633039] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.624 [2024-07-14 07:40:01.633051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.624 [2024-07-14 07:40:01.633058] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.633065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.624 [2024-07-14 07:40:01.633074] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:45.624 [2024-07-14 07:40:01.633083] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:45.624 [2024-07-14 07:40:01.633096] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:45.624 [2024-07-14 07:40:01.633207] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:45.624 [2024-07-14 07:40:01.633216] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:45.624 [2024-07-14 07:40:01.633232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.633240] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.633246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.624 [2024-07-14 07:40:01.633256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.624 [2024-07-14 07:40:01.633277] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.624 [2024-07-14 07:40:01.633469] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.624 [2024-07-14 07:40:01.633481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.624 [2024-07-14 07:40:01.633488] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.624 
[2024-07-14 07:40:01.633495] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.624 [2024-07-14 07:40:01.633503] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:45.624 [2024-07-14 07:40:01.633520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.633528] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.633539] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.624 [2024-07-14 07:40:01.633550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.624 [2024-07-14 07:40:01.633571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.624 [2024-07-14 07:40:01.633719] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.624 [2024-07-14 07:40:01.633732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.624 [2024-07-14 07:40:01.633738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.624 [2024-07-14 07:40:01.633745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.624 [2024-07-14 07:40:01.633754] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:45.624 [2024-07-14 07:40:01.633762] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:45.624 [2024-07-14 07:40:01.633776] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:45.625 [2024-07-14 07:40:01.633790] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:45.625 [2024-07-14 07:40:01.633806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.633814] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.633820] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.633831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.625 [2024-07-14 07:40:01.633853] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.625 [2024-07-14 07:40:01.634075] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.625 [2024-07-14 07:40:01.634089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.625 [2024-07-14 07:40:01.634096] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634103] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89fe10): datao=0, datal=4096, cccid=0 00:22:45.625 [2024-07-14 07:40:01.634111] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x91fbf0) on tqpair(0x89fe10): expected_datao=0, payload_size=4096 00:22:45.625 
[2024-07-14 07:40:01.634162] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634188] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.625 [2024-07-14 07:40:01.634407] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.625 [2024-07-14 07:40:01.634413] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634420] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.625 [2024-07-14 07:40:01.634434] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:45.625 [2024-07-14 07:40:01.634444] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:45.625 [2024-07-14 07:40:01.634452] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:45.625 [2024-07-14 07:40:01.634461] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:45.625 [2024-07-14 07:40:01.634469] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:45.625 [2024-07-14 07:40:01.634482] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:45.625 [2024-07-14 07:40:01.634502] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:45.625 [2024-07-14 07:40:01.634516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634523] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.634541] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.625 [2024-07-14 07:40:01.634579] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.625 [2024-07-14 07:40:01.634803] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.625 [2024-07-14 07:40:01.634816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.625 [2024-07-14 07:40:01.634822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634829] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x91fbf0) on tqpair=0x89fe10 00:22:45.625 [2024-07-14 07:40:01.634843] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634851] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634857] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.634874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.625 [2024-07-14 07:40:01.634887] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634894] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.634909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.625 [2024-07-14 07:40:01.634919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634925] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634932] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.634940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.625 [2024-07-14 07:40:01.634950] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634956] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.634962] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.634971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.625 [2024-07-14 07:40:01.634980] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:45.625 [2024-07-14 07:40:01.634999] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:45.625 [2024-07-14 07:40:01.635011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.635035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.625 [2024-07-14 07:40:01.635062] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fbf0, cid 0, qid 0 00:22:45.625 [2024-07-14 07:40:01.635074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91fd50, cid 1, qid 0 00:22:45.625 [2024-07-14 07:40:01.635082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x91feb0, cid 2, qid 0 00:22:45.625 [2024-07-14 07:40:01.635090] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920010, cid 3, qid 0 00:22:45.625 [2024-07-14 07:40:01.635097] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920170, cid 4, qid 0 00:22:45.625 [2024-07-14 07:40:01.635280] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.625 [2024-07-14 07:40:01.635292] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.625 [2024-07-14 07:40:01.635299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635306] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x920170) on 
tqpair=0x89fe10 00:22:45.625 [2024-07-14 07:40:01.635315] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:45.625 [2024-07-14 07:40:01.635324] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:45.625 [2024-07-14 07:40:01.635341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635351] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635373] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.635383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.625 [2024-07-14 07:40:01.635403] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920170, cid 4, qid 0 00:22:45.625 [2024-07-14 07:40:01.635621] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.625 [2024-07-14 07:40:01.635637] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.625 [2024-07-14 07:40:01.635644] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635651] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89fe10): datao=0, datal=4096, cccid=4 00:22:45.625 [2024-07-14 07:40:01.635658] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x920170) on tqpair(0x89fe10): expected_datao=0, payload_size=4096 00:22:45.625 [2024-07-14 07:40:01.635669] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635677] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.625 [2024-07-14 07:40:01.635738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.625 [2024-07-14 07:40:01.635745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635751] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x920170) on tqpair=0x89fe10 00:22:45.625 [2024-07-14 07:40:01.635772] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:45.625 [2024-07-14 07:40:01.635812] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635829] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.635840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.625 [2024-07-14 07:40:01.635852] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.635859] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.639889] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x89fe10) 00:22:45.625 [2024-07-14 07:40:01.639918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.625 [2024-07-14 07:40:01.639947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920170, cid 4, qid 0 00:22:45.625 [2024-07-14 07:40:01.639974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9202d0, cid 5, qid 0 00:22:45.625 [2024-07-14 07:40:01.640198] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.625 [2024-07-14 07:40:01.640210] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.625 [2024-07-14 07:40:01.640217] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.640223] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89fe10): datao=0, datal=1024, cccid=4 00:22:45.625 [2024-07-14 07:40:01.640231] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x920170) on tqpair(0x89fe10): expected_datao=0, payload_size=1024 00:22:45.625 [2024-07-14 07:40:01.640242] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.640249] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.640273] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.625 [2024-07-14 07:40:01.640282] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.625 [2024-07-14 07:40:01.640288] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.640295] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9202d0) on tqpair=0x89fe10 00:22:45.625 [2024-07-14 07:40:01.681017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.625 [2024-07-14 07:40:01.681036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.625 [2024-07-14 07:40:01.681043] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.625 [2024-07-14 07:40:01.681050] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x920170) on tqpair=0x89fe10 00:22:45.625 [2024-07-14 07:40:01.681069] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681079] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89fe10) 00:22:45.626 [2024-07-14 07:40:01.681096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.626 [2024-07-14 07:40:01.681126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920170, cid 4, qid 0 00:22:45.626 [2024-07-14 07:40:01.681308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.626 [2024-07-14 07:40:01.681321] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.626 [2024-07-14 07:40:01.681328] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681334] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89fe10): datao=0, datal=3072, cccid=4 00:22:45.626 [2024-07-14 07:40:01.681342] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x920170) on tqpair(0x89fe10): expected_datao=0, payload_size=3072 00:22:45.626 [2024-07-14 07:40:01.681353] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:22:45.626 [2024-07-14 07:40:01.681361] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681420] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.626 [2024-07-14 07:40:01.681431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.626 [2024-07-14 07:40:01.681438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681445] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x920170) on tqpair=0x89fe10 00:22:45.626 [2024-07-14 07:40:01.681460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89fe10) 00:22:45.626 [2024-07-14 07:40:01.681489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.626 [2024-07-14 07:40:01.681518] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920170, cid 4, qid 0 00:22:45.626 [2024-07-14 07:40:01.681691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.626 [2024-07-14 07:40:01.681707] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.626 [2024-07-14 07:40:01.681713] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681720] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89fe10): datao=0, datal=8, cccid=4 00:22:45.626 [2024-07-14 07:40:01.681728] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x920170) on tqpair(0x89fe10): expected_datao=0, payload_size=8 00:22:45.626 [2024-07-14 07:40:01.681738] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.681745] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.722077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.626 [2024-07-14 07:40:01.722097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.626 [2024-07-14 07:40:01.722105] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.626 [2024-07-14 07:40:01.722112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x920170) on tqpair=0x89fe10 00:22:45.626 ===================================================== 00:22:45.626 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:45.626 ===================================================== 00:22:45.626 Controller Capabilities/Features 00:22:45.626 ================================ 00:22:45.626 Vendor ID: 0000 00:22:45.626 Subsystem Vendor ID: 0000 00:22:45.626 Serial Number: .................... 00:22:45.626 Model Number: ........................................ 
00:22:45.626 Firmware Version: 24.01.1 00:22:45.626 Recommended Arb Burst: 0 00:22:45.626 IEEE OUI Identifier: 00 00 00 00:22:45.626 Multi-path I/O 00:22:45.626 May have multiple subsystem ports: No 00:22:45.626 May have multiple controllers: No 00:22:45.626 Associated with SR-IOV VF: No 00:22:45.626 Max Data Transfer Size: 131072 00:22:45.626 Max Number of Namespaces: 0 00:22:45.626 Max Number of I/O Queues: 1024 00:22:45.626 NVMe Specification Version (VS): 1.3 00:22:45.626 NVMe Specification Version (Identify): 1.3 00:22:45.626 Maximum Queue Entries: 128 00:22:45.626 Contiguous Queues Required: Yes 00:22:45.626 Arbitration Mechanisms Supported 00:22:45.626 Weighted Round Robin: Not Supported 00:22:45.626 Vendor Specific: Not Supported 00:22:45.626 Reset Timeout: 15000 ms 00:22:45.626 Doorbell Stride: 4 bytes 00:22:45.626 NVM Subsystem Reset: Not Supported 00:22:45.626 Command Sets Supported 00:22:45.626 NVM Command Set: Supported 00:22:45.626 Boot Partition: Not Supported 00:22:45.626 Memory Page Size Minimum: 4096 bytes 00:22:45.626 Memory Page Size Maximum: 4096 bytes 00:22:45.626 Persistent Memory Region: Not Supported 00:22:45.626 Optional Asynchronous Events Supported 00:22:45.626 Namespace Attribute Notices: Not Supported 00:22:45.626 Firmware Activation Notices: Not Supported 00:22:45.626 ANA Change Notices: Not Supported 00:22:45.626 PLE Aggregate Log Change Notices: Not Supported 00:22:45.626 LBA Status Info Alert Notices: Not Supported 00:22:45.626 EGE Aggregate Log Change Notices: Not Supported 00:22:45.626 Normal NVM Subsystem Shutdown event: Not Supported 00:22:45.626 Zone Descriptor Change Notices: Not Supported 00:22:45.626 Discovery Log Change Notices: Supported 00:22:45.626 Controller Attributes 00:22:45.626 128-bit Host Identifier: Not Supported 00:22:45.626 Non-Operational Permissive Mode: Not Supported 00:22:45.626 NVM Sets: Not Supported 00:22:45.626 Read Recovery Levels: Not Supported 00:22:45.626 Endurance Groups: Not Supported 00:22:45.626 Predictable Latency Mode: Not Supported 00:22:45.626 Traffic Based Keep ALive: Not Supported 00:22:45.626 Namespace Granularity: Not Supported 00:22:45.626 SQ Associations: Not Supported 00:22:45.626 UUID List: Not Supported 00:22:45.626 Multi-Domain Subsystem: Not Supported 00:22:45.626 Fixed Capacity Management: Not Supported 00:22:45.626 Variable Capacity Management: Not Supported 00:22:45.626 Delete Endurance Group: Not Supported 00:22:45.626 Delete NVM Set: Not Supported 00:22:45.626 Extended LBA Formats Supported: Not Supported 00:22:45.626 Flexible Data Placement Supported: Not Supported 00:22:45.626 00:22:45.626 Controller Memory Buffer Support 00:22:45.626 ================================ 00:22:45.626 Supported: No 00:22:45.626 00:22:45.626 Persistent Memory Region Support 00:22:45.626 ================================ 00:22:45.626 Supported: No 00:22:45.626 00:22:45.626 Admin Command Set Attributes 00:22:45.626 ============================ 00:22:45.626 Security Send/Receive: Not Supported 00:22:45.626 Format NVM: Not Supported 00:22:45.626 Firmware Activate/Download: Not Supported 00:22:45.626 Namespace Management: Not Supported 00:22:45.626 Device Self-Test: Not Supported 00:22:45.626 Directives: Not Supported 00:22:45.626 NVMe-MI: Not Supported 00:22:45.626 Virtualization Management: Not Supported 00:22:45.626 Doorbell Buffer Config: Not Supported 00:22:45.626 Get LBA Status Capability: Not Supported 00:22:45.626 Command & Feature Lockdown Capability: Not Supported 00:22:45.626 Abort Command Limit: 1 00:22:45.626 
Async Event Request Limit: 4 00:22:45.626 Number of Firmware Slots: N/A 00:22:45.626 Firmware Slot 1 Read-Only: N/A 00:22:45.626 Firmware Activation Without Reset: N/A 00:22:45.626 Multiple Update Detection Support: N/A 00:22:45.626 Firmware Update Granularity: No Information Provided 00:22:45.626 Per-Namespace SMART Log: No 00:22:45.626 Asymmetric Namespace Access Log Page: Not Supported 00:22:45.626 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:45.626 Command Effects Log Page: Not Supported 00:22:45.626 Get Log Page Extended Data: Supported 00:22:45.626 Telemetry Log Pages: Not Supported 00:22:45.626 Persistent Event Log Pages: Not Supported 00:22:45.626 Supported Log Pages Log Page: May Support 00:22:45.626 Commands Supported & Effects Log Page: Not Supported 00:22:45.626 Feature Identifiers & Effects Log Page:May Support 00:22:45.626 NVMe-MI Commands & Effects Log Page: May Support 00:22:45.626 Data Area 4 for Telemetry Log: Not Supported 00:22:45.626 Error Log Page Entries Supported: 128 00:22:45.626 Keep Alive: Not Supported 00:22:45.626 00:22:45.626 NVM Command Set Attributes 00:22:45.626 ========================== 00:22:45.626 Submission Queue Entry Size 00:22:45.626 Max: 1 00:22:45.626 Min: 1 00:22:45.626 Completion Queue Entry Size 00:22:45.626 Max: 1 00:22:45.626 Min: 1 00:22:45.626 Number of Namespaces: 0 00:22:45.626 Compare Command: Not Supported 00:22:45.626 Write Uncorrectable Command: Not Supported 00:22:45.626 Dataset Management Command: Not Supported 00:22:45.626 Write Zeroes Command: Not Supported 00:22:45.626 Set Features Save Field: Not Supported 00:22:45.626 Reservations: Not Supported 00:22:45.626 Timestamp: Not Supported 00:22:45.626 Copy: Not Supported 00:22:45.626 Volatile Write Cache: Not Present 00:22:45.626 Atomic Write Unit (Normal): 1 00:22:45.626 Atomic Write Unit (PFail): 1 00:22:45.626 Atomic Compare & Write Unit: 1 00:22:45.626 Fused Compare & Write: Supported 00:22:45.626 Scatter-Gather List 00:22:45.626 SGL Command Set: Supported 00:22:45.626 SGL Keyed: Supported 00:22:45.626 SGL Bit Bucket Descriptor: Not Supported 00:22:45.626 SGL Metadata Pointer: Not Supported 00:22:45.626 Oversized SGL: Not Supported 00:22:45.626 SGL Metadata Address: Not Supported 00:22:45.626 SGL Offset: Supported 00:22:45.626 Transport SGL Data Block: Not Supported 00:22:45.626 Replay Protected Memory Block: Not Supported 00:22:45.626 00:22:45.626 Firmware Slot Information 00:22:45.626 ========================= 00:22:45.626 Active slot: 0 00:22:45.626 00:22:45.626 00:22:45.626 Error Log 00:22:45.626 ========= 00:22:45.626 00:22:45.626 Active Namespaces 00:22:45.626 ================= 00:22:45.626 Discovery Log Page 00:22:45.626 ================== 00:22:45.627 Generation Counter: 2 00:22:45.627 Number of Records: 2 00:22:45.627 Record Format: 0 00:22:45.627 00:22:45.627 Discovery Log Entry 0 00:22:45.627 ---------------------- 00:22:45.627 Transport Type: 3 (TCP) 00:22:45.627 Address Family: 1 (IPv4) 00:22:45.627 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:45.627 Entry Flags: 00:22:45.627 Duplicate Returned Information: 1 00:22:45.627 Explicit Persistent Connection Support for Discovery: 1 00:22:45.627 Transport Requirements: 00:22:45.627 Secure Channel: Not Required 00:22:45.627 Port ID: 0 (0x0000) 00:22:45.627 Controller ID: 65535 (0xffff) 00:22:45.627 Admin Max SQ Size: 128 00:22:45.627 Transport Service Identifier: 4420 00:22:45.627 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:45.627 Transport Address: 10.0.0.2 00:22:45.627 
Discovery Log Entry 1 00:22:45.627 ---------------------- 00:22:45.627 Transport Type: 3 (TCP) 00:22:45.627 Address Family: 1 (IPv4) 00:22:45.627 Subsystem Type: 2 (NVM Subsystem) 00:22:45.627 Entry Flags: 00:22:45.627 Duplicate Returned Information: 0 00:22:45.627 Explicit Persistent Connection Support for Discovery: 0 00:22:45.627 Transport Requirements: 00:22:45.627 Secure Channel: Not Required 00:22:45.627 Port ID: 0 (0x0000) 00:22:45.627 Controller ID: 65535 (0xffff) 00:22:45.627 Admin Max SQ Size: 128 00:22:45.627 Transport Service Identifier: 4420 00:22:45.627 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:45.627 Transport Address: 10.0.0.2 [2024-07-14 07:40:01.722224] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:45.627 [2024-07-14 07:40:01.722251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.627 [2024-07-14 07:40:01.722264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.627 [2024-07-14 07:40:01.722274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.627 [2024-07-14 07:40:01.722283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.627 [2024-07-14 07:40:01.722298] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.722307] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.722314] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89fe10) 00:22:45.627 [2024-07-14 07:40:01.722325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.627 [2024-07-14 07:40:01.722351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920010, cid 3, qid 0 00:22:45.627 [2024-07-14 07:40:01.722608] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.627 [2024-07-14 07:40:01.722621] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.627 [2024-07-14 07:40:01.722629] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.722636] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x920010) on tqpair=0x89fe10 00:22:45.627 [2024-07-14 07:40:01.722648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.722656] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.722663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89fe10) 00:22:45.627 [2024-07-14 07:40:01.722674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.627 [2024-07-14 07:40:01.722701] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920010, cid 3, qid 0 00:22:45.627 [2024-07-14 07:40:01.726895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.627 [2024-07-14 07:40:01.726923] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.627 [2024-07-14 07:40:01.726930] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.726941] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x920010) on tqpair=0x89fe10 00:22:45.627 [2024-07-14 07:40:01.726951] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:45.627 [2024-07-14 07:40:01.726959] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:45.627 [2024-07-14 07:40:01.726976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.727000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.727006] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89fe10) 00:22:45.627 [2024-07-14 07:40:01.727017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.627 [2024-07-14 07:40:01.727040] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x920010, cid 3, qid 0 00:22:45.627 [2024-07-14 07:40:01.727212] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.627 [2024-07-14 07:40:01.727227] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.627 [2024-07-14 07:40:01.727234] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.627 [2024-07-14 07:40:01.727240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x920010) on tqpair=0x89fe10 00:22:45.627 [2024-07-14 07:40:01.727255] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:22:45.627 00:22:45.627 07:40:01 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:45.627 [2024-07-14 07:40:01.761118] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
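
The shell step above points spdk_nvme_identify at the NVM subsystem advertised in Discovery Log Entry 1 (nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420). For reference, a minimal host program doing the same connect through SPDK's public API is sketched below. It uses only documented calls from spdk/env.h and spdk/nvme.h (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data); it is an illustration with abbreviated error handling, not the test script's own code.

    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* illustrative app name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string the shell step passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the connect/enable/identify state machine traced above. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID: 0x%04x\n", cdata->cntlid);
        printf("max transfer size: %u bytes\n",
               spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

        spdk_nvme_detach(ctrlr);
        return 0;
    }

spdk_nvme_connect() performs synchronously what the DEBUG lines below log step by step: reading VS and CAP, the CC.EN handshake, Identify, AER configuration, and the keep-alive setup.
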
00:22:45.627 [2024-07-14 07:40:01.761160] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167349 ] 00:22:45.627 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.893 [2024-07-14 07:40:01.792904] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:45.893 [2024-07-14 07:40:01.792961] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:45.893 [2024-07-14 07:40:01.792972] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:45.893 [2024-07-14 07:40:01.792987] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:45.893 [2024-07-14 07:40:01.792999] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:45.893 [2024-07-14 07:40:01.796904] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:45.893 [2024-07-14 07:40:01.796960] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc1be10 0 00:22:45.893 [2024-07-14 07:40:01.804877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:45.893 [2024-07-14 07:40:01.804898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:45.893 [2024-07-14 07:40:01.804907] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:45.893 [2024-07-14 07:40:01.804913] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:45.893 [2024-07-14 07:40:01.804966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.804978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.804985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.893 [2024-07-14 07:40:01.805003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:45.893 [2024-07-14 07:40:01.805031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.893 [2024-07-14 07:40:01.811879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.893 [2024-07-14 07:40:01.811898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.893 [2024-07-14 07:40:01.811905] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.811912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.893 [2024-07-14 07:40:01.811931] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:45.893 [2024-07-14 07:40:01.811942] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:45.893 [2024-07-14 07:40:01.811952] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:45.893 [2024-07-14 07:40:01.811968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.811977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.893 [2024-07-14 
07:40:01.811983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.893 [2024-07-14 07:40:01.811994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.893 [2024-07-14 07:40:01.812018] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.893 [2024-07-14 07:40:01.812205] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.893 [2024-07-14 07:40:01.812220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.893 [2024-07-14 07:40:01.812227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812234] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.893 [2024-07-14 07:40:01.812242] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:45.893 [2024-07-14 07:40:01.812255] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:45.893 [2024-07-14 07:40:01.812267] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812275] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812281] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.893 [2024-07-14 07:40:01.812292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.893 [2024-07-14 07:40:01.812313] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.893 [2024-07-14 07:40:01.812579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.893 [2024-07-14 07:40:01.812592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.893 [2024-07-14 07:40:01.812599] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812606] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.893 [2024-07-14 07:40:01.812614] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:45.893 [2024-07-14 07:40:01.812628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:45.893 [2024-07-14 07:40:01.812640] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812647] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.893 [2024-07-14 07:40:01.812664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.893 [2024-07-14 07:40:01.812689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.893 [2024-07-14 07:40:01.812846] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.893 [2024-07-14 07:40:01.812859] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.893 
[2024-07-14 07:40:01.812873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812880] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.893 [2024-07-14 07:40:01.812889] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:45.893 [2024-07-14 07:40:01.812906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.812921] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.893 [2024-07-14 07:40:01.812931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.893 [2024-07-14 07:40:01.812952] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.893 [2024-07-14 07:40:01.813115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.893 [2024-07-14 07:40:01.813130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.893 [2024-07-14 07:40:01.813137] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.813144] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.893 [2024-07-14 07:40:01.813151] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:45.893 [2024-07-14 07:40:01.813160] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:45.893 [2024-07-14 07:40:01.813173] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:45.893 [2024-07-14 07:40:01.813283] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:45.893 [2024-07-14 07:40:01.813291] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:45.893 [2024-07-14 07:40:01.813303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.813310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.893 [2024-07-14 07:40:01.813316] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.813326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.894 [2024-07-14 07:40:01.813347] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.894 [2024-07-14 07:40:01.813545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.894 [2024-07-14 07:40:01.813561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.894 [2024-07-14 07:40:01.813568] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.813574] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.894 
[2024-07-14 07:40:01.813583] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:45.894 [2024-07-14 07:40:01.813600] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.813608] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.813614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.813628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.894 [2024-07-14 07:40:01.813650] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.894 [2024-07-14 07:40:01.813806] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.894 [2024-07-14 07:40:01.813818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.894 [2024-07-14 07:40:01.813825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.813832] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.894 [2024-07-14 07:40:01.813839] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:45.894 [2024-07-14 07:40:01.813847] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.813860] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:45.894 [2024-07-14 07:40:01.813893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.813909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.813917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.813923] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.813934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.894 [2024-07-14 07:40:01.813956] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.894 [2024-07-14 07:40:01.814188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.894 [2024-07-14 07:40:01.814204] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.894 [2024-07-14 07:40:01.814211] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814217] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc1be10): datao=0, datal=4096, cccid=0 00:22:45.894 [2024-07-14 07:40:01.814225] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9bbf0) on tqpair(0xc1be10): expected_datao=0, payload_size=4096 00:22:45.894 [2024-07-14 07:40:01.814236] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814244] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
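
The "setting state to ..." transitions above follow the generic NVMe controller-enable handshake: check CC.EN, disable and wait for CSTS.RDY to clear if needed, then write CC.EN = 1 and wait for CSTS.RDY = 1. A condensed sketch follows; read_cc(), read_csts() and write_cc() are hypothetical helpers standing in for the FABRIC PROPERTY GET/SET commands visible in the trace, and the 15000 ms timeouts the log mentions are omitted.

    #include <stdint.h>

    /* Hypothetical property accessors; on fabrics these map to the
     * Property Get/Set admin commands logged above. */
    extern uint32_t read_cc(void);
    extern uint32_t read_csts(void);
    extern void write_cc(uint32_t val);

    void enable_controller(void)
    {
        if (read_cc() & 0x1u) {            /* "check en": CC.EN already 1?  */
            write_cc(read_cc() & ~0x1u);   /* disable first...              */
            while (read_csts() & 0x1u) { } /* "wait for CSTS.RDY = 0"       */
        }
        write_cc(read_cc() | 0x1u);        /* "Setting CC.EN = 1"           */
        while (!(read_csts() & 0x1u)) { }  /* "wait for CSTS.RDY = 1"       */
        /* CC.EN = 1 && CSTS.RDY = 1: controller is ready, per the trace.   */
    }
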
00:22:45.894 [2024-07-14 07:40:01.814298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.894 [2024-07-14 07:40:01.814309] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.894 [2024-07-14 07:40:01.814316] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.894 [2024-07-14 07:40:01.814333] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:45.894 [2024-07-14 07:40:01.814342] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:45.894 [2024-07-14 07:40:01.814350] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:45.894 [2024-07-14 07:40:01.814356] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:45.894 [2024-07-14 07:40:01.814364] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:45.894 [2024-07-14 07:40:01.814372] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.814390] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.814407] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814415] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.814432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.894 [2024-07-14 07:40:01.814453] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.894 [2024-07-14 07:40:01.814719] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.894 [2024-07-14 07:40:01.814735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.894 [2024-07-14 07:40:01.814742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814749] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9bbf0) on tqpair=0xc1be10 00:22:45.894 [2024-07-14 07:40:01.814759] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.814783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.894 [2024-07-14 07:40:01.814793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814806] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.814814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.894 [2024-07-14 07:40:01.814824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814831] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814837] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.814861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.894 [2024-07-14 07:40:01.814880] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814887] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.814902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.894 [2024-07-14 07:40:01.814926] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.814946] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.814959] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814966] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.814972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.814982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.894 [2024-07-14 07:40:01.815005] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bbf0, cid 0, qid 0 00:22:45.894 [2024-07-14 07:40:01.815016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9bd50, cid 1, qid 0 00:22:45.894 [2024-07-14 07:40:01.815023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9beb0, cid 2, qid 0 00:22:45.894 [2024-07-14 07:40:01.815034] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.894 [2024-07-14 07:40:01.815043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c170, cid 4, qid 0 00:22:45.894 [2024-07-14 07:40:01.815254] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.894 [2024-07-14 07:40:01.815270] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.894 [2024-07-14 07:40:01.815277] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.815283] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c170) on tqpair=0xc1be10 00:22:45.894 [2024-07-14 07:40:01.815291] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:45.894 [2024-07-14 07:40:01.815300] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.815329] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.815345] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.815357] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.815364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.815370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.815380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:45.894 [2024-07-14 07:40:01.815400] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c170, cid 4, qid 0 00:22:45.894 [2024-07-14 07:40:01.815595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.894 [2024-07-14 07:40:01.815608] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.894 [2024-07-14 07:40:01.815615] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.815621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c170) on tqpair=0xc1be10 00:22:45.894 [2024-07-14 07:40:01.815686] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.815706] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:45.894 [2024-07-14 07:40:01.815721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.815743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.894 [2024-07-14 07:40:01.815750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc1be10) 00:22:45.894 [2024-07-14 07:40:01.815760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.894 [2024-07-14 07:40:01.815780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c170, cid 4, qid 0 00:22:45.894 [2024-07-14 07:40:01.819879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.894 [2024-07-14 07:40:01.819895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.895 [2024-07-14 07:40:01.819902] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.819908] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc1be10): datao=0, datal=4096, cccid=4 00:22:45.895 [2024-07-14 07:40:01.819916] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9c170) on tqpair(0xc1be10): expected_datao=0, payload_size=4096 00:22:45.895 [2024-07-14 07:40:01.819926] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.819934] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.819946] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.895 [2024-07-14 07:40:01.819956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.895 [2024-07-14 07:40:01.819962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.819968] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c170) on tqpair=0xc1be10 00:22:45.895 [2024-07-14 07:40:01.819990] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:45.895 [2024-07-14 07:40:01.820006] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.820039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.820053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.820061] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.820067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 07:40:01.820078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.895 [2024-07-14 07:40:01.820101] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c170, cid 4, qid 0 00:22:45.895 [2024-07-14 07:40:01.820319] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.895 [2024-07-14 07:40:01.820335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.895 [2024-07-14 07:40:01.820342] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.820348] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc1be10): datao=0, datal=4096, cccid=4 00:22:45.895 [2024-07-14 07:40:01.820356] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9c170) on tqpair(0xc1be10): expected_datao=0, payload_size=4096 00:22:45.895 [2024-07-14 07:40:01.820407] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.820432] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.820623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.895 [2024-07-14 07:40:01.820636] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.895 [2024-07-14 07:40:01.820643] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.820649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c170) on tqpair=0xc1be10 00:22:45.895 [2024-07-14 07:40:01.820674] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.820694] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.820708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.820715] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 
07:40:01.820721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 07:40:01.820732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.895 [2024-07-14 07:40:01.820753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c170, cid 4, qid 0 00:22:45.895 [2024-07-14 07:40:01.820939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.895 [2024-07-14 07:40:01.820955] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.895 [2024-07-14 07:40:01.820962] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.820968] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc1be10): datao=0, datal=4096, cccid=4 00:22:45.895 [2024-07-14 07:40:01.820980] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9c170) on tqpair(0xc1be10): expected_datao=0, payload_size=4096 00:22:45.895 [2024-07-14 07:40:01.820992] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821000] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.895 [2024-07-14 07:40:01.821064] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.895 [2024-07-14 07:40:01.821071] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c170) on tqpair=0xc1be10 00:22:45.895 [2024-07-14 07:40:01.821091] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.821106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.821123] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.821133] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.821143] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.821152] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:45.895 [2024-07-14 07:40:01.821160] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:45.895 [2024-07-14 07:40:01.821168] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:45.895 [2024-07-14 07:40:01.821187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821202] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 
07:40:01.821228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.895 [2024-07-14 07:40:01.821239] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821246] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821252] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 07:40:01.821260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.895 [2024-07-14 07:40:01.821285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c170, cid 4, qid 0 00:22:45.895 [2024-07-14 07:40:01.821311] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c2d0, cid 5, qid 0 00:22:45.895 [2024-07-14 07:40:01.821499] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.895 [2024-07-14 07:40:01.821515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.895 [2024-07-14 07:40:01.821521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821528] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c170) on tqpair=0xc1be10 00:22:45.895 [2024-07-14 07:40:01.821538] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.895 [2024-07-14 07:40:01.821547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.895 [2024-07-14 07:40:01.821554] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c2d0) on tqpair=0xc1be10 00:22:45.895 [2024-07-14 07:40:01.821576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821589] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 07:40:01.821622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.895 [2024-07-14 07:40:01.821643] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c2d0, cid 5, qid 0 00:22:45.895 [2024-07-14 07:40:01.821858] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.895 [2024-07-14 07:40:01.821879] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.895 [2024-07-14 07:40:01.821887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c2d0) on tqpair=0xc1be10 00:22:45.895 [2024-07-14 07:40:01.821909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.821924] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 07:40:01.821934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
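The capsule trace above is the admin-queue feature probe the SPDK host driver runs on its way to the ready state: GET FEATURES for arbitration (cdw10 0x01, cid 4), power management (0x02, cid 5) and temperature threshold (0x04, cid 5), with a KEEP ALIVE capsule interleaved. A minimal sketch of the same queries issued by hand with nvme-cli, assuming the controller is already connected and enumerated as /dev/nvme0 (a hypothetical node name, not taken from this log):

    # Feature IDs match the cdw10 values printed in the trace:
    # 0x01 arbitration, 0x02 power management, 0x04 temperature threshold.
    nvme get-feature /dev/nvme0 --feature-id=0x01
    nvme get-feature /dev/nvme0 --feature-id=0x02
    nvme get-feature /dev/nvme0 --feature-id=0x04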
00:22:45.895 [2024-07-14 07:40:01.821955] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c2d0, cid 5, qid 0 00:22:45.895 [2024-07-14 07:40:01.822125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.895 [2024-07-14 07:40:01.822140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.895 [2024-07-14 07:40:01.822147] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.822153] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c2d0) on tqpair=0xc1be10 00:22:45.895 [2024-07-14 07:40:01.822169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.822178] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.822184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 07:40:01.822195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.895 [2024-07-14 07:40:01.822215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c2d0, cid 5, qid 0 00:22:45.895 [2024-07-14 07:40:01.822491] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.895 [2024-07-14 07:40:01.822507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.895 [2024-07-14 07:40:01.822514] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.822520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c2d0) on tqpair=0xc1be10 00:22:45.895 [2024-07-14 07:40:01.822540] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.822550] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.822557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 07:40:01.822567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.895 [2024-07-14 07:40:01.822579] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.822586] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.895 [2024-07-14 07:40:01.822593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc1be10) 00:22:45.895 [2024-07-14 07:40:01.822602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.896 [2024-07-14 07:40:01.822629] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.822640] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.822647] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc1be10) 00:22:45.896 [2024-07-14 07:40:01.822656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.896 [2024-07-14 07:40:01.822668] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.896 [2024-07-14 
07:40:01.822675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.822681] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc1be10) 00:22:45.896 [2024-07-14 07:40:01.822690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.896 [2024-07-14 07:40:01.822712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c2d0, cid 5, qid 0 00:22:45.896 [2024-07-14 07:40:01.822738] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c170, cid 4, qid 0 00:22:45.896 [2024-07-14 07:40:01.822745] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c430, cid 6, qid 0 00:22:45.896 [2024-07-14 07:40:01.822753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c590, cid 7, qid 0 00:22:45.896 [2024-07-14 07:40:01.823102] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.896 [2024-07-14 07:40:01.823118] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.896 [2024-07-14 07:40:01.823125] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823132] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc1be10): datao=0, datal=8192, cccid=5 00:22:45.896 [2024-07-14 07:40:01.823139] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9c2d0) on tqpair(0xc1be10): expected_datao=0, payload_size=8192 00:22:45.896 [2024-07-14 07:40:01.823151] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823159] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.896 [2024-07-14 07:40:01.823176] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.896 [2024-07-14 07:40:01.823183] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823189] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc1be10): datao=0, datal=512, cccid=4 00:22:45.896 [2024-07-14 07:40:01.823196] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9c170) on tqpair(0xc1be10): expected_datao=0, payload_size=512 00:22:45.896 [2024-07-14 07:40:01.823207] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823214] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.896 [2024-07-14 07:40:01.823231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.896 [2024-07-14 07:40:01.823237] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823244] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc1be10): datao=0, datal=512, cccid=6 00:22:45.896 [2024-07-14 07:40:01.823251] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9c430) on tqpair(0xc1be10): expected_datao=0, payload_size=512 00:22:45.896 [2024-07-14 07:40:01.823261] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823269] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.896 
[2024-07-14 07:40:01.823277] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:45.896 [2024-07-14 07:40:01.823286] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:45.896 [2024-07-14 07:40:01.823292] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823299] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc1be10): datao=0, datal=4096, cccid=7 00:22:45.896 [2024-07-14 07:40:01.823310] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9c590) on tqpair(0xc1be10): expected_datao=0, payload_size=4096 00:22:45.896 [2024-07-14 07:40:01.823322] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823329] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.896 [2024-07-14 07:40:01.823351] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.896 [2024-07-14 07:40:01.823357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823364] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c2d0) on tqpair=0xc1be10 00:22:45.896 [2024-07-14 07:40:01.823399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.896 [2024-07-14 07:40:01.823411] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.896 [2024-07-14 07:40:01.823418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c170) on tqpair=0xc1be10 00:22:45.896 [2024-07-14 07:40:01.823437] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.896 [2024-07-14 07:40:01.823448] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.896 [2024-07-14 07:40:01.823454] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c430) on tqpair=0xc1be10 00:22:45.896 [2024-07-14 07:40:01.823471] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.896 [2024-07-14 07:40:01.823495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.896 [2024-07-14 07:40:01.823502] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.896 [2024-07-14 07:40:01.823508] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c590) on tqpair=0xc1be10 00:22:45.896 ===================================================== 00:22:45.896 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.896 ===================================================== 00:22:45.896 Controller Capabilities/Features 00:22:45.896 ================================ 00:22:45.896 Vendor ID: 8086 00:22:45.896 Subsystem Vendor ID: 8086 00:22:45.896 Serial Number: SPDK00000000000001 00:22:45.896 Model Number: SPDK bdev Controller 00:22:45.896 Firmware Version: 24.01.1 00:22:45.896 Recommended Arb Burst: 6 00:22:45.896 IEEE OUI Identifier: e4 d2 5c 00:22:45.896 Multi-path I/O 00:22:45.896 May have multiple subsystem ports: Yes 00:22:45.896 May have multiple controllers: Yes 00:22:45.896 Associated with SR-IOV VF: No 00:22:45.896 Max Data Transfer Size: 131072 00:22:45.896 Max Number of Namespaces: 32 
00:22:45.896 Max Number of I/O Queues: 127 00:22:45.896 NVMe Specification Version (VS): 1.3 00:22:45.896 NVMe Specification Version (Identify): 1.3 00:22:45.896 Maximum Queue Entries: 128 00:22:45.896 Contiguous Queues Required: Yes 00:22:45.896 Arbitration Mechanisms Supported 00:22:45.896 Weighted Round Robin: Not Supported 00:22:45.896 Vendor Specific: Not Supported 00:22:45.896 Reset Timeout: 15000 ms 00:22:45.896 Doorbell Stride: 4 bytes 00:22:45.896 NVM Subsystem Reset: Not Supported 00:22:45.896 Command Sets Supported 00:22:45.896 NVM Command Set: Supported 00:22:45.896 Boot Partition: Not Supported 00:22:45.896 Memory Page Size Minimum: 4096 bytes 00:22:45.896 Memory Page Size Maximum: 4096 bytes 00:22:45.896 Persistent Memory Region: Not Supported 00:22:45.896 Optional Asynchronous Events Supported 00:22:45.896 Namespace Attribute Notices: Supported 00:22:45.896 Firmware Activation Notices: Not Supported 00:22:45.896 ANA Change Notices: Not Supported 00:22:45.896 PLE Aggregate Log Change Notices: Not Supported 00:22:45.896 LBA Status Info Alert Notices: Not Supported 00:22:45.896 EGE Aggregate Log Change Notices: Not Supported 00:22:45.896 Normal NVM Subsystem Shutdown event: Not Supported 00:22:45.896 Zone Descriptor Change Notices: Not Supported 00:22:45.896 Discovery Log Change Notices: Not Supported 00:22:45.896 Controller Attributes 00:22:45.896 128-bit Host Identifier: Supported 00:22:45.896 Non-Operational Permissive Mode: Not Supported 00:22:45.896 NVM Sets: Not Supported 00:22:45.896 Read Recovery Levels: Not Supported 00:22:45.896 Endurance Groups: Not Supported 00:22:45.896 Predictable Latency Mode: Not Supported 00:22:45.896 Traffic Based Keep Alive: Not Supported 00:22:45.896 Namespace Granularity: Not Supported 00:22:45.896 SQ Associations: Not Supported 00:22:45.896 UUID List: Not Supported 00:22:45.896 Multi-Domain Subsystem: Not Supported 00:22:45.896 Fixed Capacity Management: Not Supported 00:22:45.896 Variable Capacity Management: Not Supported 00:22:45.896 Delete Endurance Group: Not Supported 00:22:45.896 Delete NVM Set: Not Supported 00:22:45.896 Extended LBA Formats Supported: Not Supported 00:22:45.896 Flexible Data Placement Supported: Not Supported 00:22:45.896 00:22:45.896 Controller Memory Buffer Support 00:22:45.896 ================================ 00:22:45.896 Supported: No 00:22:45.896 00:22:45.896 Persistent Memory Region Support 00:22:45.896 ================================ 00:22:45.896 Supported: No 00:22:45.896 00:22:45.896 Admin Command Set Attributes 00:22:45.896 ============================ 00:22:45.896 Security Send/Receive: Not Supported 00:22:45.896 Format NVM: Not Supported 00:22:45.896 Firmware Activate/Download: Not Supported 00:22:45.896 Namespace Management: Not Supported 00:22:45.896 Device Self-Test: Not Supported 00:22:45.896 Directives: Not Supported 00:22:45.896 NVMe-MI: Not Supported 00:22:45.896 Virtualization Management: Not Supported 00:22:45.896 Doorbell Buffer Config: Not Supported 00:22:45.896 Get LBA Status Capability: Not Supported 00:22:45.896 Command & Feature Lockdown Capability: Not Supported 00:22:45.896 Abort Command Limit: 4 00:22:45.896 Async Event Request Limit: 4 00:22:45.896 Number of Firmware Slots: N/A 00:22:45.896 Firmware Slot 1 Read-Only: N/A 00:22:45.896 Firmware Activation Without Reset: N/A 00:22:45.896 Multiple Update Detection Support: N/A 00:22:45.896 Firmware Update Granularity: No Information Provided 00:22:45.896 Per-Namespace SMART Log: No 00:22:45.896 Asymmetric Namespace Access Log Page: Not 
Supported 00:22:45.896 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:45.896 Command Effects Log Page: Supported 00:22:45.896 Get Log Page Extended Data: Supported 00:22:45.896 Telemetry Log Pages: Not Supported 00:22:45.896 Persistent Event Log Pages: Not Supported 00:22:45.896 Supported Log Pages Log Page: May Support 00:22:45.896 Commands Supported & Effects Log Page: Not Supported 00:22:45.897 Feature Identifiers & Effects Log Page: May Support 00:22:45.897 NVMe-MI Commands & Effects Log Page: May Support 00:22:45.897 Data Area 4 for Telemetry Log: Not Supported 00:22:45.897 Error Log Page Entries Supported: 128 00:22:45.897 Keep Alive: Supported 00:22:45.897 Keep Alive Granularity: 10000 ms 00:22:45.897 00:22:45.897 NVM Command Set Attributes 00:22:45.897 ========================== 00:22:45.897 Submission Queue Entry Size 00:22:45.897 Max: 64 00:22:45.897 Min: 64 00:22:45.897 Completion Queue Entry Size 00:22:45.897 Max: 16 00:22:45.897 Min: 16 00:22:45.897 Number of Namespaces: 32 00:22:45.897 Compare Command: Supported 00:22:45.897 Write Uncorrectable Command: Not Supported 00:22:45.897 Dataset Management Command: Supported 00:22:45.897 Write Zeroes Command: Supported 00:22:45.897 Set Features Save Field: Not Supported 00:22:45.897 Reservations: Supported 00:22:45.897 Timestamp: Not Supported 00:22:45.897 Copy: Supported 00:22:45.897 Volatile Write Cache: Present 00:22:45.897 Atomic Write Unit (Normal): 1 00:22:45.897 Atomic Write Unit (PFail): 1 00:22:45.897 Atomic Compare & Write Unit: 1 00:22:45.897 Fused Compare & Write: Supported 00:22:45.897 Scatter-Gather List 00:22:45.897 SGL Command Set: Supported 00:22:45.897 SGL Keyed: Supported 00:22:45.897 SGL Bit Bucket Descriptor: Not Supported 00:22:45.897 SGL Metadata Pointer: Not Supported 00:22:45.897 Oversized SGL: Not Supported 00:22:45.897 SGL Metadata Address: Not Supported 00:22:45.897 SGL Offset: Supported 00:22:45.897 Transport SGL Data Block: Not Supported 00:22:45.897 Replay Protected Memory Block: Not Supported 00:22:45.897 00:22:45.897 Firmware Slot Information 00:22:45.897 ========================= 00:22:45.897 Active slot: 1 00:22:45.897 Slot 1 Firmware Revision: 24.01.1 00:22:45.897 00:22:45.897 00:22:45.897 Commands Supported and Effects 00:22:45.897 ============================== 00:22:45.897 Admin Commands 00:22:45.897 -------------- 00:22:45.897 Get Log Page (02h): Supported 00:22:45.897 Identify (06h): Supported 00:22:45.897 Abort (08h): Supported 00:22:45.897 Set Features (09h): Supported 00:22:45.897 Get Features (0Ah): Supported 00:22:45.897 Asynchronous Event Request (0Ch): Supported 00:22:45.897 Keep Alive (18h): Supported 00:22:45.897 I/O Commands 00:22:45.897 ------------ 00:22:45.897 Flush (00h): Supported LBA-Change 00:22:45.897 Write (01h): Supported LBA-Change 00:22:45.897 Read (02h): Supported 00:22:45.897 Compare (05h): Supported 00:22:45.897 Write Zeroes (08h): Supported LBA-Change 00:22:45.897 Dataset Management (09h): Supported LBA-Change 00:22:45.897 Copy (19h): Supported LBA-Change 00:22:45.897 Unknown (79h): Supported LBA-Change 00:22:45.897 Unknown (7Ah): Supported 00:22:45.897 00:22:45.897 Error Log 00:22:45.897 ========= 00:22:45.897 00:22:45.897 Arbitration 00:22:45.897 =========== 00:22:45.897 Arbitration Burst: 1 00:22:45.897 00:22:45.897 Power Management 00:22:45.897 ================ 00:22:45.897 Number of Power States: 1 00:22:45.897 Current Power State: Power State #0 00:22:45.897 Power State #0: 00:22:45.897 Max Power: 0.00 W 00:22:45.897 Non-Operational State: Operational 
00:22:45.897 Entry Latency: Not Reported 00:22:45.897 Exit Latency: Not Reported 00:22:45.897 Relative Read Throughput: 0 00:22:45.897 Relative Read Latency: 0 00:22:45.897 Relative Write Throughput: 0 00:22:45.897 Relative Write Latency: 0 00:22:45.897 Idle Power: Not Reported 00:22:45.897 Active Power: Not Reported 00:22:45.897 Non-Operational Permissive Mode: Not Supported 00:22:45.897 00:22:45.897 Health Information 00:22:45.897 ================== 00:22:45.897 Critical Warnings: 00:22:45.897 Available Spare Space: OK 00:22:45.897 Temperature: OK 00:22:45.897 Device Reliability: OK 00:22:45.897 Read Only: No 00:22:45.897 Volatile Memory Backup: OK 00:22:45.897 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:45.897 Temperature Threshold: [2024-07-14 07:40:01.823640] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.823652] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.823659] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc1be10) 00:22:45.897 [2024-07-14 07:40:01.823669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.897 [2024-07-14 07:40:01.823691] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c590, cid 7, qid 0 00:22:45.897 [2024-07-14 07:40:01.827878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.897 [2024-07-14 07:40:01.827895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.897 [2024-07-14 07:40:01.827902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.827909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c590) on tqpair=0xc1be10 00:22:45.897 [2024-07-14 07:40:01.827964] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:45.897 [2024-07-14 07:40:01.827986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.897 [2024-07-14 07:40:01.827998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.897 [2024-07-14 07:40:01.828007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.897 [2024-07-14 07:40:01.828017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.897 [2024-07-14 07:40:01.828029] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.828037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.828043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.897 [2024-07-14 07:40:01.828057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.897 [2024-07-14 07:40:01.828081] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.897 [2024-07-14 07:40:01.828266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.897 [2024-07-14 07:40:01.828279] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:45.897 [2024-07-14 07:40:01.828286] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.828292] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.897 [2024-07-14 07:40:01.828303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.828310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.828317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.897 [2024-07-14 07:40:01.828327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.897 [2024-07-14 07:40:01.828352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.897 [2024-07-14 07:40:01.828512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.897 [2024-07-14 07:40:01.828524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.897 [2024-07-14 07:40:01.828531] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.828538] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.897 [2024-07-14 07:40:01.828545] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:45.897 [2024-07-14 07:40:01.828553] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:45.897 [2024-07-14 07:40:01.828568] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.828577] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.897 [2024-07-14 07:40:01.828583] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.828593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.828613] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.828796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.828812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.828819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.828825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.828842] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.828851] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.828858] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.828876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.828899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.829043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:45.898 [2024-07-14 07:40:01.829055] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.829062] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829069] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.829084] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.829114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.829135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.829278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.829293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.829300] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829307] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.829323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.829349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.829370] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.829521] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.829536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.829543] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829549] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.829565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829580] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.829591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.829611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.829758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.829770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.829776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.829799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829807] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.829814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.829824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.829844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.830006] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.830021] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.830028] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.830051] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830060] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830070] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.830081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.830102] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.830307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.830322] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.830329] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830335] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.830351] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830360] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830367] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.830377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.830413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.830662] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.830675] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.830682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.830704] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830713] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.830719] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.830730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.830750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.831067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.831083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.831089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.831112] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831128] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.831138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.831159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.831411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.831427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.831434] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.831457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831472] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.831487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.831508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.831652] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.831665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.831671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.831694] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831703] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 
07:40:01.831709] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.831720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.831739] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.831887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.831902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.831908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.831931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.831946] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.898 [2024-07-14 07:40:01.831957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.898 [2024-07-14 07:40:01.831977] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.898 [2024-07-14 07:40:01.832125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.898 [2024-07-14 07:40:01.832137] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.898 [2024-07-14 07:40:01.832143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.898 [2024-07-14 07:40:01.832150] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.898 [2024-07-14 07:40:01.832166] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832181] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.899 [2024-07-14 07:40:01.832191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.899 [2024-07-14 07:40:01.832211] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.899 [2024-07-14 07:40:01.832358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.899 [2024-07-14 07:40:01.832371] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.899 [2024-07-14 07:40:01.832378] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.899 [2024-07-14 07:40:01.832399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832408] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.899 [2024-07-14 07:40:01.832425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.899 [2024-07-14 07:40:01.832449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.899 [2024-07-14 07:40:01.832596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.899 [2024-07-14 07:40:01.832609] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.899 [2024-07-14 07:40:01.832616] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.899 [2024-07-14 07:40:01.832638] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832647] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.899 [2024-07-14 07:40:01.832664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.899 [2024-07-14 07:40:01.832683] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.899 [2024-07-14 07:40:01.832891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.899 [2024-07-14 07:40:01.832907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.899 [2024-07-14 07:40:01.832914] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.899 [2024-07-14 07:40:01.832937] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832946] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.832952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.899 [2024-07-14 07:40:01.832963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.899 [2024-07-14 07:40:01.832983] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.899 [2024-07-14 07:40:01.833183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.899 [2024-07-14 07:40:01.833198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.899 [2024-07-14 07:40:01.833205] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.833211] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.899 [2024-07-14 07:40:01.833227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.833236] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.833243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.899 [2024-07-14 07:40:01.833253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.899 [2024-07-14 07:40:01.833273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, 
qid 0 00:22:45.899 [2024-07-14 07:40:01.833430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.899 [2024-07-14 07:40:01.833445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.899 [2024-07-14 07:40:01.833452] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.833459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.899 [2024-07-14 07:40:01.833475] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.833484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.833490] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.899 [2024-07-14 07:40:01.833501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.899 [2024-07-14 07:40:01.833525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.899 [2024-07-14 07:40:01.836876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.899 [2024-07-14 07:40:01.836894] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.899 [2024-07-14 07:40:01.836901] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.836907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.899 [2024-07-14 07:40:01.836925] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.836950] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.836956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc1be10) 00:22:45.899 [2024-07-14 07:40:01.836967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.899 [2024-07-14 07:40:01.836990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9c010, cid 3, qid 0 00:22:45.899 [2024-07-14 07:40:01.837179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:45.899 [2024-07-14 07:40:01.837191] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:45.899 [2024-07-14 07:40:01.837198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:45.899 [2024-07-14 07:40:01.837204] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc9c010) on tqpair=0xc1be10 00:22:45.899 [2024-07-14 07:40:01.837217] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:22:45.899 0 Kelvin (-273 Celsius) 00:22:45.899 Available Spare: 0% 00:22:45.899 Available Spare Threshold: 0% 00:22:45.899 Life Percentage Used: 0% 00:22:45.899 Data Units Read: 0 00:22:45.899 Data Units Written: 0 00:22:45.899 Host Read Commands: 0 00:22:45.899 Host Write Commands: 0 00:22:45.899 Controller Busy Time: 0 minutes 00:22:45.899 Power Cycles: 0 00:22:45.899 Power On Hours: 0 hours 00:22:45.899 Unsafe Shutdowns: 0 00:22:45.899 Unrecoverable Media Errors: 0 00:22:45.899 Lifetime Error Log Entries: 0 00:22:45.899 Warning Temperature Time: 0 minutes 00:22:45.899 Critical Temperature Time: 0 minutes 00:22:45.899 00:22:45.899 Number of Queues 00:22:45.899 
================ 00:22:45.899 Number of I/O Submission Queues: 127 00:22:45.899 Number of I/O Completion Queues: 127 00:22:45.899 00:22:45.899 Active Namespaces 00:22:45.899 ================= 00:22:45.899 Namespace ID:1 00:22:45.899 Error Recovery Timeout: Unlimited 00:22:45.899 Command Set Identifier: NVM (00h) 00:22:45.899 Deallocate: Supported 00:22:45.899 Deallocated/Unwritten Error: Not Supported 00:22:45.899 Deallocated Read Value: Unknown 00:22:45.899 Deallocate in Write Zeroes: Not Supported 00:22:45.899 Deallocated Guard Field: 0xFFFF 00:22:45.899 Flush: Supported 00:22:45.899 Reservation: Supported 00:22:45.899 Namespace Sharing Capabilities: Multiple Controllers 00:22:45.899 Size (in LBAs): 131072 (0GiB) 00:22:45.899 Capacity (in LBAs): 131072 (0GiB) 00:22:45.899 Utilization (in LBAs): 131072 (0GiB) 00:22:45.899 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:45.899 EUI64: ABCDEF0123456789 00:22:45.899 UUID: c58df8cb-cafa-493f-bb0c-9657d7f25528 00:22:45.899 Thin Provisioning: Not Supported 00:22:45.899 Per-NS Atomic Units: Yes 00:22:45.899 Atomic Boundary Size (Normal): 0 00:22:45.899 Atomic Boundary Size (PFail): 0 00:22:45.899 Atomic Boundary Offset: 0 00:22:45.899 Maximum Single Source Range Length: 65535 00:22:45.899 Maximum Copy Length: 65535 00:22:45.899 Maximum Source Range Count: 1 00:22:45.899 NGUID/EUI64 Never Reused: No 00:22:45.899 Namespace Write Protected: No 00:22:45.899 Number of LBA Formats: 1 00:22:45.899 Current LBA Format: LBA Format #00 00:22:45.899 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:45.899 00:22:45.899 07:40:01 -- host/identify.sh@51 -- # sync 00:22:45.899 07:40:01 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.899 07:40:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.899 07:40:01 -- common/autotest_common.sh@10 -- # set +x 00:22:45.899 07:40:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.899 07:40:01 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:45.899 07:40:01 -- host/identify.sh@56 -- # nvmftestfini 00:22:45.899 07:40:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:45.899 07:40:01 -- nvmf/common.sh@116 -- # sync 00:22:45.899 07:40:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:45.899 07:40:01 -- nvmf/common.sh@119 -- # set +e 00:22:45.899 07:40:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:45.899 07:40:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:45.899 rmmod nvme_tcp 00:22:45.899 rmmod nvme_fabrics 00:22:45.899 rmmod nvme_keyring 00:22:45.899 07:40:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:45.899 07:40:01 -- nvmf/common.sh@123 -- # set -e 00:22:45.899 07:40:01 -- nvmf/common.sh@124 -- # return 0 00:22:45.899 07:40:01 -- nvmf/common.sh@477 -- # '[' -n 4167182 ']' 00:22:45.899 07:40:01 -- nvmf/common.sh@478 -- # killprocess 4167182 00:22:45.899 07:40:01 -- common/autotest_common.sh@926 -- # '[' -z 4167182 ']' 00:22:45.899 07:40:01 -- common/autotest_common.sh@930 -- # kill -0 4167182 00:22:45.899 07:40:01 -- common/autotest_common.sh@931 -- # uname 00:22:45.899 07:40:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:45.899 07:40:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4167182 00:22:45.899 07:40:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:45.899 07:40:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:45.899 07:40:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4167182' 
00:22:45.899 killing process with pid 4167182 00:22:45.899 07:40:01 -- common/autotest_common.sh@945 -- # kill 4167182 00:22:45.899 [2024-07-14 07:40:01.959015] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:45.900 07:40:01 -- common/autotest_common.sh@950 -- # wait 4167182 00:22:46.185 07:40:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:46.185 07:40:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:46.185 07:40:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:46.185 07:40:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.185 07:40:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:46.185 07:40:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.185 07:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.185 07:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.712 07:40:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:48.712 00:22:48.712 real 0m6.065s 00:22:48.712 user 0m6.961s 00:22:48.712 sys 0m1.916s 00:22:48.712 07:40:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.712 07:40:04 -- common/autotest_common.sh@10 -- # set +x 00:22:48.712 ************************************ 00:22:48.712 END TEST nvmf_identify 00:22:48.712 ************************************ 00:22:48.712 07:40:04 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:48.712 07:40:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:48.712 07:40:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:48.712 07:40:04 -- common/autotest_common.sh@10 -- # set +x 00:22:48.712 ************************************ 00:22:48.712 START TEST nvmf_perf 00:22:48.712 ************************************ 00:22:48.712 07:40:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:48.712 * Looking for test storage... 
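The nvmf_identify suite that finishes above drives SPDK's userspace host stack end to end: connect to the target at 10.0.0.2:4420, walk the controller and namespace identify pages (the long dump earlier in the log), then delete the subsystem over RPC and unload the kernel fabrics modules. A hedged manual equivalent with the kernel initiator and nvme-cli; the /dev/nvme0 node name is an assumption, since it depends on enumeration order:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0       # controller capabilities, as dumped above
    nvme id-ns /dev/nvme0 -n 1    # namespace 1 geometry, NGUID/EUI64/UUID
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1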
00:22:48.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.712 07:40:04 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.712 07:40:04 -- nvmf/common.sh@7 -- # uname -s 00:22:48.712 07:40:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.712 07:40:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.712 07:40:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.712 07:40:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.712 07:40:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.712 07:40:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.712 07:40:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.712 07:40:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.712 07:40:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.712 07:40:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.712 07:40:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.712 07:40:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.712 07:40:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.712 07:40:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.712 07:40:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.712 07:40:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.712 07:40:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.712 07:40:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.712 07:40:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.712 07:40:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.712 07:40:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.712 07:40:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.712 07:40:04 -- paths/export.sh@5 -- # export PATH 00:22:48.712 07:40:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.712 07:40:04 -- nvmf/common.sh@46 -- # : 0 00:22:48.712 07:40:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:48.712 07:40:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:48.712 07:40:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:48.712 07:40:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.712 07:40:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.712 07:40:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:48.712 07:40:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:48.712 07:40:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:48.712 07:40:04 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:48.712 07:40:04 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:48.712 07:40:04 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:48.712 07:40:04 -- host/perf.sh@17 -- # nvmftestinit 00:22:48.712 07:40:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:48.712 07:40:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.712 07:40:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:48.712 07:40:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:48.712 07:40:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:48.712 07:40:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.712 07:40:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.712 07:40:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.712 07:40:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:48.712 07:40:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:48.712 07:40:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:48.712 07:40:04 -- common/autotest_common.sh@10 -- # set +x 00:22:50.616 07:40:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:50.616 07:40:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:50.616 07:40:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:50.616 07:40:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:50.616 07:40:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:50.616 07:40:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:50.616 07:40:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:50.616 07:40:06 -- nvmf/common.sh@294 -- # net_devs=() 
00:22:50.616 07:40:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:50.616 07:40:06 -- nvmf/common.sh@295 -- # e810=() 00:22:50.616 07:40:06 -- nvmf/common.sh@295 -- # local -ga e810 00:22:50.616 07:40:06 -- nvmf/common.sh@296 -- # x722=() 00:22:50.616 07:40:06 -- nvmf/common.sh@296 -- # local -ga x722 00:22:50.616 07:40:06 -- nvmf/common.sh@297 -- # mlx=() 00:22:50.616 07:40:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:50.616 07:40:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.616 07:40:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:50.616 07:40:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:50.616 07:40:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:50.616 07:40:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:50.616 07:40:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:50.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:50.616 07:40:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:50.616 07:40:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:50.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:50.616 07:40:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:50.616 07:40:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:50.616 07:40:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.616 07:40:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:50.616 07:40:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:50.616 07:40:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:50.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:50.616 07:40:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.616 07:40:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:50.616 07:40:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.616 07:40:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:50.616 07:40:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.616 07:40:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:50.616 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:50.616 07:40:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.616 07:40:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:50.616 07:40:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:50.616 07:40:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:50.616 07:40:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.616 07:40:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.616 07:40:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.616 07:40:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:50.616 07:40:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.616 07:40:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.616 07:40:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:50.616 07:40:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.616 07:40:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.616 07:40:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:50.616 07:40:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:50.616 07:40:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.616 07:40:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.616 07:40:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.616 07:40:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.616 07:40:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:50.616 07:40:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.616 07:40:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.616 07:40:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.616 07:40:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:50.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:22:50.616 00:22:50.616 --- 10.0.0.2 ping statistics --- 00:22:50.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.616 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:50.616 07:40:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:22:50.616 00:22:50.616 --- 10.0.0.1 ping statistics --- 00:22:50.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.616 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:50.616 07:40:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.616 07:40:06 -- nvmf/common.sh@410 -- # return 0 00:22:50.616 07:40:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:50.616 07:40:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.616 07:40:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:50.616 07:40:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.616 07:40:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:50.616 07:40:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:50.616 07:40:06 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:50.616 07:40:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:50.616 07:40:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:50.616 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:22:50.616 07:40:06 -- nvmf/common.sh@469 -- # nvmfpid=4169287 00:22:50.616 07:40:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:50.616 07:40:06 -- nvmf/common.sh@470 -- # waitforlisten 4169287 00:22:50.616 07:40:06 -- common/autotest_common.sh@819 -- # '[' -z 4169287 ']' 00:22:50.616 07:40:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.616 07:40:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:50.616 07:40:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.616 07:40:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:50.616 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:22:50.616 [2024-07-14 07:40:06.576917] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:50.616 [2024-07-14 07:40:06.576997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.616 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.616 [2024-07-14 07:40:06.639378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.616 [2024-07-14 07:40:06.750256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:50.616 [2024-07-14 07:40:06.750414] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.616 [2024-07-14 07:40:06.750445] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.616 [2024-07-14 07:40:06.750469] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
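The nvmf_tcp_init sequence traced above gives the test a self-contained loopback topology: one E810 port stays in the root namespace as the initiator NIC (cvl_0_1, 10.0.0.1) while its sibling is moved into the cvl_0_0_ns_spdk namespace as the target NIC (cvl_0_0, 10.0.0.2). A minimal sketch of the same setup, assuming the two ports already enumerate as cvl_0_0/cvl_0_1 as in this run:

    #!/usr/bin/env bash
    # Move one port into a private namespace so target and initiator
    # can talk NVMe/TCP over real hardware on a single host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # verify reachability

Every command here appears verbatim in the trace; only the script framing is added.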
00:22:50.616 [2024-07-14 07:40:06.750550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:50.616 [2024-07-14 07:40:06.750617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:50.616 [2024-07-14 07:40:06.750644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:22:50.616 [2024-07-14 07:40:06.750646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:51.551 07:40:07 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:51.551 07:40:07 -- common/autotest_common.sh@852 -- # return 0
00:22:51.551 07:40:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:22:51.551 07:40:07 -- common/autotest_common.sh@718 -- # xtrace_disable
00:22:51.551 07:40:07 -- common/autotest_common.sh@10 -- # set +x
00:22:51.551 07:40:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:51.551 07:40:07 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:22:51.551 07:40:07 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:22:54.830 07:40:10 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:22:54.830 07:40:10 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:22:54.830 07:40:10 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0
00:22:54.830 07:40:10 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:22:55.088 07:40:11 -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:22:55.088 07:40:11 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']'
00:22:55.088 07:40:11 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:22:55.088 07:40:11 -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:22:55.088 07:40:11 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:55.346 [2024-07-14 07:40:11.417390] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:55.346 07:40:11 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:55.604 07:40:11 -- host/perf.sh@45 -- # for bdev in $bdevs
00:22:55.604 07:40:11 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:55.862 07:40:11 -- host/perf.sh@45 -- # for bdev in $bdevs
00:22:55.862 07:40:11 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:22:56.120 07:40:12 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:56.377 [2024-07-14 07:40:12.372795] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:56.377 07:40:12 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:56.635 07:40:12 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']'
00:22:56.635 07:40:12 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:22:56.635 07:40:12 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
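perf.sh then drives the freshly started nvmf_tgt entirely over rpc.py: create the TCP transport, create a subsystem, attach a Malloc bdev and the local NVMe bdev as namespaces, and open data and discovery listeners. Condensed from the trace into a standalone sketch (rpc.py on PATH is assumed):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create 64 512    # 64 MiB bdev with 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # becomes NSID 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # becomes NSID 2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two namespaces (NSID 1 = Malloc0, NSID 2 = Nvme0n1) are what the spdk_nvme_perf runs below report per-NSID latency for.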
00:22:56.635 07:40:12 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:22:58.005 Initializing NVMe Controllers
00:22:58.005 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54]
00:22:58.005 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0
00:22:58.005 Initialization complete. Launching workers.
00:22:58.005 ========================================================
00:22:58.005 Latency(us)
00:22:58.005 Device Information : IOPS MiB/s Average min max
00:22:58.005 PCIE (0000:88:00.0) NSID 1 from core 0: 85971.34 335.83 371.65 36.99 6271.32
00:22:58.005 ========================================================
00:22:58.005 Total : 85971.34 335.83 371.65 36.99 6271.32
00:22:58.005
00:22:58.005 07:40:13 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:58.005 EAL: No free 2048 kB hugepages reported on node 1
00:22:58.938 Initializing NVMe Controllers
00:22:58.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:58.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:58.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:58.938 Initialization complete. Launching workers.
00:22:58.938 ========================================================
00:22:58.938 Latency(us)
00:22:58.938 Device Information : IOPS MiB/s Average min max
00:22:58.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.66 0.37 10702.59 208.76 45574.31
00:22:58.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.84 0.18 21815.41 7959.05 47902.02
00:22:58.938 ========================================================
00:22:58.938 Total : 141.50 0.55 14302.52 208.76 47902.02
00:22:58.938
00:22:58.938 07:40:15 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:58.938 EAL: No free 2048 kB hugepages reported on node 1
00:23:00.311 Initializing NVMe Controllers
00:23:00.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:00.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:00.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:00.311 Initialization complete. Launching workers.
00:23:00.311 ========================================================
00:23:00.311 Latency(us)
00:23:00.311 Device Information : IOPS MiB/s Average min max
00:23:00.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8378.37 32.73 3818.97 529.12 7778.02
00:23:00.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3932.68 15.36 8160.91 5505.21 15727.67
00:23:00.311 ========================================================
00:23:00.311 Total : 12311.05 48.09 5205.98 529.12 15727.67
00:23:00.311
00:23:00.311 07:40:16 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:00.311 07:40:16 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:00.311 07:40:16 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:00.311 EAL: No free 2048 kB hugepages reported on node 1
00:23:02.874 Initializing NVMe Controllers
00:23:02.874 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:02.874 Controller IO queue size 128, less than required.
00:23:02.874 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:02.874 Controller IO queue size 128, less than required.
00:23:02.874 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:02.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:02.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:02.874 Initialization complete. Launching workers.
00:23:02.874 ========================================================
00:23:02.874 Latency(us)
00:23:02.874 Device Information : IOPS MiB/s Average min max
00:23:02.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 675.05 168.76 198749.17 141904.69 336669.52
00:23:02.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.11 147.53 221206.91 71580.76 357895.66
00:23:02.874 ========================================================
00:23:02.874 Total : 1265.16 316.29 209224.13 71580.76 357895.66
00:23:02.874
00:23:02.874 07:40:18 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:02.874 EAL: No free 2048 kB hugepages reported on node 1
00:23:03.132 No valid NVMe controllers or AIO or URING devices found
00:23:03.132 Initializing NVMe Controllers
00:23:03.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:03.132 Controller IO queue size 128, less than required.
00:23:03.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.132 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:03.132 Controller IO queue size 128, less than required.
00:23:03.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:03.132 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:03.132 WARNING: Some requested NVMe devices were skipped
00:23:03.132 07:40:19 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:23:03.132 EAL: No free 2048 kB hugepages reported on node 1
00:23:05.668 Initializing NVMe Controllers
00:23:05.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:05.668 Controller IO queue size 128, less than required.
00:23:05.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:05.668 Controller IO queue size 128, less than required.
00:23:05.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:05.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:05.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:05.668 Initialization complete. Launching workers.
00:23:05.668
00:23:05.668 ====================
00:23:05.668 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:05.668 TCP transport:
00:23:05.668 polls: 49338
00:23:05.668 idle_polls: 14389
00:23:05.668 sock_completions: 34949
00:23:05.668 nvme_completions: 2039
00:23:05.668 submitted_requests: 3135
00:23:05.668 queued_requests: 1
00:23:05.668
00:23:05.668 ====================
00:23:05.668 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:05.668 TCP transport:
00:23:05.668 polls: 54954
00:23:05.668 idle_polls: 21017
00:23:05.668 sock_completions: 33937
00:23:05.668 nvme_completions: 2138
00:23:05.668 submitted_requests: 3354
00:23:05.668 queued_requests: 1
00:23:05.668 ========================================================
00:23:05.668 Latency(us)
00:23:05.668 Device Information : IOPS MiB/s Average min max
00:23:05.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 573.13 143.28 234771.93 118677.07 372468.92
00:23:05.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.61 149.40 223707.92 79385.00 344698.10
00:23:05.668 ========================================================
00:23:05.668 Total : 1170.74 292.69 229124.23 79385.00 372468.92
00:23:05.668
00:23:05.668 07:40:21 -- host/perf.sh@66 -- # sync
00:23:05.668 07:40:21 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:05.925 07:40:21 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:23:05.925 07:40:21 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:23:05.925 07:40:21 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:23:09.197 07:40:25 -- host/perf.sh@72 -- # ls_guid=413e2808-6ae0-41b0-ae2b-adad372e3146
00:23:09.197 07:40:25 -- host/perf.sh@73 -- # get_lvs_free_mb 413e2808-6ae0-41b0-ae2b-adad372e3146
00:23:09.197 07:40:25 -- common/autotest_common.sh@1343 -- # local lvs_uuid=413e2808-6ae0-41b0-ae2b-adad372e3146
00:23:09.197 07:40:25 -- common/autotest_common.sh@1344 -- # local lvs_info
00:23:09.197 07:40:25 -- common/autotest_common.sh@1345 -- # local fc
00:23:09.197 07:40:25 -- common/autotest_common.sh@1346 -- # local cs
00:23:09.197 07:40:25 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:23:09.455 07:40:25 -- common/autotest_common.sh@1347 -- # lvs_info='[
00:23:09.455 {
00:23:09.455 "uuid": "413e2808-6ae0-41b0-ae2b-adad372e3146",
00:23:09.455 "name": "lvs_0",
00:23:09.455 "base_bdev": "Nvme0n1",
00:23:09.455 "total_data_clusters": 238234,
00:23:09.455 "free_clusters": 238234,
00:23:09.455 "block_size": 512,
00:23:09.455 "cluster_size": 4194304
00:23:09.455 }
00:23:09.455 ]'
00:23:09.455 07:40:25 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="413e2808-6ae0-41b0-ae2b-adad372e3146") .free_clusters'
00:23:09.455 07:40:25 -- common/autotest_common.sh@1348 -- # fc=238234
00:23:09.455 07:40:25 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="413e2808-6ae0-41b0-ae2b-adad372e3146") .cluster_size'
00:23:09.455 07:40:25 -- common/autotest_common.sh@1349 -- # cs=4194304
00:23:09.455 07:40:25 -- common/autotest_common.sh@1352 -- # free_mb=952936
00:23:09.455 07:40:25 -- common/autotest_common.sh@1353 -- # echo 952936
00:23:09.455 952936
00:23:09.455 07:40:25 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']'
00:23:09.455 07:40:25 -- host/perf.sh@78 -- # free_mb=20480
00:23:09.455 07:40:25 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 413e2808-6ae0-41b0-ae2b-adad372e3146 lbd_0 20480
00:23:10.022 07:40:25 -- host/perf.sh@80 -- # lb_guid=93a2dd98-4f26-4425-b41f-c78767b25226
00:23:10.022 07:40:25 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 93a2dd98-4f26-4425-b41f-c78767b25226 lvs_n_0
00:23:10.588 07:40:26 -- host/perf.sh@83 -- # ls_nested_guid=64e9e37f-7b45-47fd-8f02-519b37b18286
00:23:10.588 07:40:26 -- host/perf.sh@84 -- # get_lvs_free_mb 64e9e37f-7b45-47fd-8f02-519b37b18286
00:23:10.588 07:40:26 -- common/autotest_common.sh@1343 -- # local lvs_uuid=64e9e37f-7b45-47fd-8f02-519b37b18286
00:23:10.588 07:40:26 -- common/autotest_common.sh@1344 -- # local lvs_info
00:23:10.588 07:40:26 -- common/autotest_common.sh@1345 -- # local fc
00:23:10.588 07:40:26 -- common/autotest_common.sh@1346 -- # local cs
00:23:10.588 07:40:26 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:23:10.846 07:40:26 -- common/autotest_common.sh@1347 -- # lvs_info='[
00:23:10.846 {
00:23:10.846 "uuid": "413e2808-6ae0-41b0-ae2b-adad372e3146",
00:23:10.846 "name": "lvs_0",
00:23:10.846 "base_bdev": "Nvme0n1",
00:23:10.846 "total_data_clusters": 238234,
00:23:10.846 "free_clusters": 233114,
00:23:10.846 "block_size": 512,
00:23:10.846 "cluster_size": 4194304
00:23:10.846 },
00:23:10.846 {
00:23:10.846 "uuid": "64e9e37f-7b45-47fd-8f02-519b37b18286",
00:23:10.846 "name": "lvs_n_0",
00:23:10.846 "base_bdev": "93a2dd98-4f26-4425-b41f-c78767b25226",
00:23:10.846 "total_data_clusters": 5114,
00:23:10.846 "free_clusters": 5114,
00:23:10.846 "block_size": 512,
00:23:10.846 "cluster_size": 4194304
00:23:10.846 }
00:23:10.846 ]'
00:23:10.846 07:40:26 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="64e9e37f-7b45-47fd-8f02-519b37b18286") .free_clusters'
00:23:10.846 07:40:26 -- common/autotest_common.sh@1348 -- # fc=5114
00:23:10.846 07:40:26 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="64e9e37f-7b45-47fd-8f02-519b37b18286") .cluster_size'
00:23:10.846 07:40:27 -- common/autotest_common.sh@1349 -- # cs=4194304
00:23:10.846 07:40:27 -- common/autotest_common.sh@1352 -- # free_mb=20456
00:23:10.846 07:40:27 -- common/autotest_common.sh@1353 -- # echo 20456
00:23:10.846 20456
00:23:10.846 07:40:27 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']'
00:23:10.846 07:40:27 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 64e9e37f-7b45-47fd-8f02-519b37b18286 lbd_nest_0 20456
00:23:11.104 07:40:27 -- host/perf.sh@88 -- # lb_nested_guid=fbbfd7f7-ba61-4d49-92a0-ca5549227f88
00:23:11.104 07:40:27 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:11.362 07:40:27 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:23:11.362 07:40:27 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 fbbfd7f7-ba61-4d49-92a0-ca5549227f88
00:23:11.621 07:40:27 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:11.878 07:40:27 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:23:11.878 07:40:27 -- host/perf.sh@96 -- # io_size=("512" "131072")
00:23:11.878 07:40:27 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:23:11.878 07:40:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:23:11.878 07:40:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:11.878 EAL: No free 2048 kB hugepages reported on node 1
00:23:24.073 Initializing NVMe Controllers
00:23:24.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:24.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:24.073 Initialization complete. Launching workers.
00:23:24.073 ========================================================
00:23:24.073 Latency(us)
00:23:24.073 Device Information : IOPS MiB/s Average min max
00:23:24.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.30 0.02 20758.61 242.75 49636.37
00:23:24.073 ========================================================
00:23:24.073 Total : 48.30 0.02 20758.61 242.75 49636.37
00:23:24.073
00:23:24.073 07:40:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:23:24.073 07:40:38 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:24.073 EAL: No free 2048 kB hugepages reported on node 1
00:23:34.087 Initializing NVMe Controllers
00:23:34.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:34.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:34.087 Initialization complete. Launching workers.
00:23:34.087 ========================================================
00:23:34.087 Latency(us)
00:23:34.087 Device Information : IOPS MiB/s Average min max
00:23:34.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.90 10.24 12219.88 5994.49 47899.14
00:23:34.087 ========================================================
00:23:34.087 Total : 81.90 10.24 12219.88 5994.49 47899.14
00:23:34.087
00:23:34.087 07:40:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:23:34.087 07:40:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:23:34.087 07:40:48 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:34.087 EAL: No free 2048 kB hugepages reported on node 1
00:23:44.050 Initializing NVMe Controllers
00:23:44.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:44.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:44.050 Initialization complete. Launching workers.
00:23:44.050 ========================================================
00:23:44.051 Latency(us)
00:23:44.051 Device Information : IOPS MiB/s Average min max
00:23:44.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7143.16 3.49 4479.27 312.03 12056.23
00:23:44.051 ========================================================
00:23:44.051 Total : 7143.16 3.49 4479.27 312.03 12056.23
00:23:44.051
00:23:44.051 07:40:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:23:44.051 07:40:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:44.051 EAL: No free 2048 kB hugepages reported on node 1
00:23:54.065 Initializing NVMe Controllers
00:23:54.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:54.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:54.065 Initialization complete. Launching workers.
00:23:54.065 ========================================================
00:23:54.065 Latency(us)
00:23:54.065 Device Information : IOPS MiB/s Average min max
00:23:54.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1602.75 200.34 19984.02 1644.77 44495.60
00:23:54.065 ========================================================
00:23:54.065 Total : 1602.75 200.34 19984.02 1644.77 44495.60
00:23:54.065
00:23:54.065 07:41:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:23:54.065 07:41:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:23:54.065 07:41:09 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:54.065 EAL: No free 2048 kB hugepages reported on node 1
00:24:04.027 Initializing NVMe Controllers
00:24:04.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:04.027 Controller IO queue size 128, less than required.
00:24:04.027 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:04.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:04.027 Initialization complete. Launching workers.
00:24:04.027 ========================================================
00:24:04.027 Latency(us)
00:24:04.027 Device Information : IOPS MiB/s Average min max
00:24:04.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12013.34 5.87 10656.52 1803.88 26202.28
00:24:04.027 ========================================================
00:24:04.027 Total : 12013.34 5.87 10656.52 1803.88 26202.28
00:24:04.027
00:24:04.027 07:41:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:24:04.027 07:41:19 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:04.027 EAL: No free 2048 kB hugepages reported on node 1
00:24:13.995 Initializing NVMe Controllers
00:24:13.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:13.996 Controller IO queue size 128, less than required.
00:24:13.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:13.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:13.996 Initialization complete. Launching workers.
00:24:13.996 ========================================================
00:24:13.996 Latency(us)
00:24:13.996 Device Information : IOPS MiB/s Average min max
00:24:13.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1175.26 146.91 109148.26 28878.28 227500.02
00:24:13.996 ========================================================
00:24:13.996 Total : 1175.26 146.91 109148.26 28878.28 227500.02
00:24:13.996
00:24:14.253 07:41:30 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:14.253 07:41:30 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fbbfd7f7-ba61-4d49-92a0-ca5549227f88
00:24:15.185 07:41:31 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:24:15.185 07:41:31 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 93a2dd98-4f26-4425-b41f-c78767b25226
00:24:15.443 07:41:31 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:24:15.701 07:41:31 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:15.701 07:41:31 -- host/perf.sh@114 -- # nvmftestfini
00:24:15.701 07:41:31 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:15.701 07:41:31 -- nvmf/common.sh@116 -- # sync
00:24:15.701 07:41:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:15.701 07:41:31 -- nvmf/common.sh@119 -- # set +e
00:24:15.701 07:41:31 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:15.701 07:41:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:15.701 rmmod nvme_tcp
00:24:15.701 rmmod nvme_fabrics
00:24:15.701 rmmod nvme_keyring
00:24:15.960 07:41:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:15.960 07:41:31 -- nvmf/common.sh@123 -- # set -e
00:24:15.960 07:41:31 -- nvmf/common.sh@124 -- # return 0
00:24:15.960 07:41:31 -- nvmf/common.sh@477 -- # '[' -n 4169287 ']'
00:24:15.960 07:41:31 -- nvmf/common.sh@478 -- # killprocess 4169287
00:24:15.960 07:41:31 -- common/autotest_common.sh@926 -- # '[' -z 4169287 ']'
00:24:15.960 07:41:31 -- common/autotest_common.sh@930 -- # kill -0 4169287
00:24:15.960 07:41:31 -- common/autotest_common.sh@931 -- # uname
00:24:15.960 07:41:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:15.960 07:41:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4169287
00:24:15.960 07:41:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:15.960 07:41:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:15.960 07:41:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4169287'
00:24:15.960 killing process with pid 4169287
00:24:15.960 07:41:31 -- common/autotest_common.sh@945 -- # kill 4169287
00:24:15.960 07:41:31 -- common/autotest_common.sh@950 -- # wait 4169287
00:24:17.860 07:41:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:17.860 07:41:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:17.860 07:41:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:17.860 07:41:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:17.860 07:41:33 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:17.860 07:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:17.860 07:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:17.860 07:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:19.763 07:41:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:24:19.763
00:24:19.763 real 1m31.265s
00:24:19.763 user 5m31.169s
00:24:19.763 sys 0m15.267s
00:24:19.763 07:41:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:19.763 07:41:35 -- common/autotest_common.sh@10 -- # set +x
00:24:19.763 ************************************
00:24:19.763 END TEST nvmf_perf
00:24:19.763 ************************************
00:24:19.763 07:41:35 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:19.763 07:41:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:24:19.763 07:41:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:24:19.763 07:41:35 -- common/autotest_common.sh@10 -- # set +x
00:24:19.763 ************************************
00:24:19.763 START TEST nvmf_fio_host
00:24:19.763 ************************************
00:24:19.763 07:41:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:19.763 * Looking for test storage...
00:24:19.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:19.763 07:41:35 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:19.763 07:41:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:19.763 07:41:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:19.763 07:41:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:19.763 07:41:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.763 07:41:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.763 07:41:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.764 07:41:35 -- paths/export.sh@5 -- # export PATH
00:24:19.764 07:41:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.764 07:41:35 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:19.764 07:41:35 -- nvmf/common.sh@7 -- # uname -s
00:24:19.764 07:41:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:19.764 07:41:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:19.764 07:41:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:19.764 07:41:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:19.764 07:41:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:19.764 07:41:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:19.764 07:41:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:19.764 07:41:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:19.764 07:41:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:19.764 07:41:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:19.764 07:41:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:24:19.764 07:41:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:24:19.764 07:41:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:19.764 07:41:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:19.764 07:41:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:19.764 07:41:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:19.764 07:41:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:19.764 07:41:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:19.764 07:41:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:19.764 07:41:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.764 07:41:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.764 07:41:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.764 07:41:35 -- paths/export.sh@5 -- # export PATH
00:24:19.764 07:41:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:19.764 07:41:35 -- nvmf/common.sh@46 -- # : 0
00:24:19.764 07:41:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:24:19.764 07:41:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:24:19.764 07:41:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:24:19.764 07:41:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:19.764 07:41:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:19.764 07:41:35 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:24:19.764 07:41:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:24:19.764 07:41:35 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:24:19.764 07:41:35 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:19.764 07:41:35 -- host/fio.sh@14 -- # nvmftestinit
00:24:19.764 07:41:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:24:19.764 07:41:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:19.764 07:41:35 -- nvmf/common.sh@436 -- # prepare_net_devs
00:24:19.764 07:41:35 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:24:19.764 07:41:35 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:24:19.764 07:41:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:19.764 07:41:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:19.764 07:41:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:19.764 07:41:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:24:19.764 07:41:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:24:19.764 07:41:35 -- nvmf/common.sh@284 -- # xtrace_disable
00:24:19.764 07:41:35 -- common/autotest_common.sh@10 -- # set +x
00:24:21.664 07:41:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:24:21.664 07:41:37 -- nvmf/common.sh@290 -- # pci_devs=()
00:24:21.664 07:41:37 -- nvmf/common.sh@290 -- # local -a pci_devs
00:24:21.664 07:41:37 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:24:21.664 07:41:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:24:21.664 07:41:37 -- nvmf/common.sh@292 -- # pci_drivers=()
00:24:21.664 07:41:37 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:24:21.664 07:41:37 -- nvmf/common.sh@294 -- # net_devs=()
00:24:21.664 07:41:37 -- nvmf/common.sh@294 -- # local -ga net_devs
00:24:21.664 07:41:37 -- nvmf/common.sh@295 -- # e810=()
00:24:21.664 07:41:37 -- nvmf/common.sh@295 -- # local -ga e810
00:24:21.664 07:41:37 -- nvmf/common.sh@296 -- # x722=()
00:24:21.664 07:41:37 -- nvmf/common.sh@296 -- # local -ga x722
00:24:21.664 07:41:37 -- nvmf/common.sh@297 -- # mlx=()
00:24:21.664 07:41:37 -- nvmf/common.sh@297 -- # local -ga mlx
00:24:21.664 07:41:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:21.664 07:41:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:24:21.664 07:41:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:24:21.664 07:41:37 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:24:21.664 07:41:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:24:21.664 07:41:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:24:21.664 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:24:21.664 07:41:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:24:21.664 07:41:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:24:21.664 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:24:21.664 07:41:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:24:21.664 07:41:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:24:21.664 07:41:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:24:21.664 07:41:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:21.664 07:41:37 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:24:21.664 07:41:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:21.664 07:41:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:24:21.664 Found net devices under 0000:0a:00.0: cvl_0_0
00:24:21.664 07:41:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:24:21.664 07:41:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:24:21.664 07:41:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:21.664 07:41:37 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:24:21.664 07:41:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:21.664 07:41:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:21.664 Found net devices under 0000:0a:00.1: cvl_0_1
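The 'Found net devices under ...' lines come from a plain sysfs walk: for each whitelisted PCI function, the script globs /sys/bus/pci/devices/$pci/net/ and strips the directory prefix to recover the kernel interface names. A standalone sketch of that discovery loop, assuming the same two E810 functions as in this run:

    #!/usr/bin/env bash
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # Each entry under .../net/ is a netdev bound to this PCI function.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep basename only, e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done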
00:24:21.664 07:41:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.664 07:41:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:21.664 07:41:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:21.664 07:41:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:21.664 07:41:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:21.664 07:41:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:21.664 07:41:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.664 07:41:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.664 07:41:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.664 07:41:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:21.664 07:41:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.664 07:41:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.664 07:41:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:21.664 07:41:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.664 07:41:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.664 07:41:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:21.664 07:41:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:21.664 07:41:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.664 07:41:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.664 07:41:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.664 07:41:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.664 07:41:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:21.664 07:41:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.664 07:41:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.664 07:41:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.664 07:41:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:21.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:24:21.664 00:24:21.664 --- 10.0.0.2 ping statistics --- 00:24:21.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.664 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:24:21.664 07:41:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:21.664 00:24:21.664 --- 10.0.0.1 ping statistics --- 00:24:21.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.664 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:21.664 07:41:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.664 07:41:37 -- nvmf/common.sh@410 -- # return 0 00:24:21.664 07:41:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:21.665 07:41:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.665 07:41:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:21.665 07:41:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:21.665 07:41:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.665 07:41:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:21.665 07:41:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:21.924 07:41:37 -- host/fio.sh@16 -- # [[ y != y ]] 00:24:21.924 07:41:37 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:21.924 07:41:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:21.924 07:41:37 -- common/autotest_common.sh@10 -- # set +x 00:24:21.924 07:41:37 -- host/fio.sh@24 -- # nvmfpid=4181611 00:24:21.924 07:41:37 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.924 07:41:37 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.924 07:41:37 -- host/fio.sh@28 -- # waitforlisten 4181611 00:24:21.924 07:41:37 -- common/autotest_common.sh@819 -- # '[' -z 4181611 ']' 00:24:21.924 07:41:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.924 07:41:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:21.924 07:41:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.924 07:41:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:21.924 07:41:37 -- common/autotest_common.sh@10 -- # set +x 00:24:21.924 [2024-07-14 07:41:37.881834] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:21.924 [2024-07-14 07:41:37.881941] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.924 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.924 [2024-07-14 07:41:37.950792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.924 [2024-07-14 07:41:38.063387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:21.924 [2024-07-14 07:41:38.063555] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.924 [2024-07-14 07:41:38.063574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.924 [2024-07-14 07:41:38.063587] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
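The nvmf_tcp_init sequence above builds the test bed out of the two E810 ports: the target port cvl_0_0 is moved into a private network namespace while the initiator port cvl_0_1 stays in the root namespace, so NVMe/TCP traffic crosses the physical link (NET_TYPE=phy) rather than loopback. A condensed replay of those steps, using the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns

Every target-side command is then prefixed with ip netns exec cvl_0_0_ns_spdk through NVMF_TARGET_NS_CMD, which is why host/fio.sh@23 launches nvmf_tgt inside the namespace.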
00:24:21.924 [2024-07-14 07:41:38.065887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.924 [2024-07-14 07:41:38.065952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.924 [2024-07-14 07:41:38.066020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.924 [2024-07-14 07:41:38.066024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.858 07:41:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:22.858 07:41:38 -- common/autotest_common.sh@852 -- # return 0 00:24:22.858 07:41:38 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:23.116 [2024-07-14 07:41:39.141407] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.116 07:41:39 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:23.116 07:41:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:23.116 07:41:39 -- common/autotest_common.sh@10 -- # set +x 00:24:23.116 07:41:39 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:23.375 Malloc1 00:24:23.375 07:41:39 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.633 07:41:39 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:23.891 07:41:39 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.149 [2024-07-14 07:41:40.221094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.149 07:41:40 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:24.408 07:41:40 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:24.408 07:41:40 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:24.408 07:41:40 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:24.408 07:41:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:24.408 07:41:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:24.408 07:41:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:24.408 07:41:40 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.408 07:41:40 -- common/autotest_common.sh@1320 -- # shift 00:24:24.408 07:41:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:24.408 07:41:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.408 07:41:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.408 07:41:40 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:24:24.408 07:41:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:24.408 07:41:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:24.408 07:41:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:24.408 07:41:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.408 07:41:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.408 07:41:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:24.408 07:41:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:24.408 07:41:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:24.408 07:41:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:24.408 07:41:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:24.408 07:41:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:24.666 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:24.666 fio-3.35 00:24:24.666 Starting 1 thread 00:24:24.666 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.259 00:24:27.259 test: (groupid=0, jobs=1): err= 0: pid=4182107: Sun Jul 14 07:41:42 2024 00:24:27.259 read: IOPS=9519, BW=37.2MiB/s (39.0MB/s)(74.6MiB/2006msec) 00:24:27.259 slat (nsec): min=1933, max=159811, avg=2498.38, stdev=1753.60 00:24:27.259 clat (usec): min=4986, max=13278, avg=7445.39, stdev=557.30 00:24:27.259 lat (usec): min=5010, max=13281, avg=7447.89, stdev=557.24 00:24:27.259 clat percentiles (usec): 00:24:27.259 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 00:24:27.259 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:24:27.259 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8094], 95.00th=[ 8291], 00:24:27.259 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[11469], 99.95th=[12518], 00:24:27.259 | 99.99th=[13304] 00:24:27.259 bw ( KiB/s): min=37184, max=38552, per=99.91%, avg=38042.00, stdev=596.43, samples=4 00:24:27.259 iops : min= 9296, max= 9638, avg=9510.50, stdev=149.11, samples=4 00:24:27.259 write: IOPS=9526, BW=37.2MiB/s (39.0MB/s)(74.7MiB/2006msec); 0 zone resets 00:24:27.259 slat (usec): min=2, max=145, avg= 2.62, stdev= 1.49 00:24:27.259 clat (usec): min=1443, max=11521, avg=5957.22, stdev=488.46 00:24:27.259 lat (usec): min=1462, max=11524, avg=5959.84, stdev=488.45 00:24:27.259 clat percentiles (usec): 00:24:27.259 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:24:27.259 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:24:27.259 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6718], 00:24:27.259 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8979], 99.95th=[10421], 00:24:27.259 | 99.99th=[11338] 00:24:27.259 bw ( KiB/s): min=37824, max=38400, per=100.00%, avg=38118.00, stdev=264.47, samples=4 00:24:27.259 iops : min= 9456, max= 9600, avg=9529.50, stdev=66.12, samples=4 00:24:27.259 lat (msec) : 2=0.01%, 4=0.07%, 10=99.81%, 20=0.12% 00:24:27.259 cpu : usr=50.32%, sys=40.70%, ctx=71, majf=0, minf=5 00:24:27.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:27.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.259 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.259 issued rwts: total=19096,19111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.259 00:24:27.259 Run status group 0 (all jobs): 00:24:27.259 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.6MiB (78.2MB), run=2006-2006msec 00:24:27.259 WRITE: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.7MiB (78.3MB), run=2006-2006msec 00:24:27.259 07:41:42 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:27.259 07:41:43 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:27.259 07:41:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:27.259 07:41:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:27.259 07:41:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:27.259 07:41:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.259 07:41:43 -- common/autotest_common.sh@1320 -- # shift 00:24:27.259 07:41:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:27.259 07:41:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.259 07:41:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.259 07:41:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:27.259 07:41:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:27.259 07:41:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:27.259 07:41:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:27.259 07:41:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.259 07:41:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.259 07:41:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:27.259 07:41:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:27.259 07:41:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:27.260 07:41:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:27.260 07:41:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:27.260 07:41:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:27.260 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:27.260 fio-3.35 00:24:27.260 Starting 1 thread 00:24:27.260 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.791 00:24:29.791 test: (groupid=0, jobs=1): err= 0: pid=4182454: Sun Jul 14 07:41:45 2024 00:24:29.791 read: IOPS=8032, BW=126MiB/s (132MB/s)(252MiB/2005msec) 00:24:29.791 slat (usec): min=2, max=113, avg= 3.62, stdev= 1.66 00:24:29.791 clat (usec): min=3561, max=53119, 
avg=10066.20, stdev=5464.82 00:24:29.792 lat (usec): min=3565, max=53123, avg=10069.83, stdev=5464.88 00:24:29.792 clat percentiles (usec): 00:24:29.792 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7308], 00:24:29.792 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:24:29.792 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12911], 95.00th=[14222], 00:24:29.792 | 99.00th=[47973], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:24:29.792 | 99.99th=[53216] 00:24:29.792 bw ( KiB/s): min=53504, max=77952, per=49.79%, avg=63992.00, stdev=10180.24, samples=4 00:24:29.792 iops : min= 3344, max= 4872, avg=3999.50, stdev=636.26, samples=4 00:24:29.792 write: IOPS=4877, BW=76.2MiB/s (79.9MB/s)(131MiB/1715msec); 0 zone resets 00:24:29.792 slat (usec): min=30, max=140, avg=33.12, stdev= 4.28 00:24:29.792 clat (usec): min=3581, max=19137, avg=10601.08, stdev=1899.85 00:24:29.792 lat (usec): min=3613, max=19170, avg=10634.21, stdev=1900.12 00:24:29.792 clat percentiles (usec): 00:24:29.792 | 1.00th=[ 6915], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[ 8979], 00:24:29.792 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10552], 60.00th=[10945], 00:24:29.792 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13173], 95.00th=[13960], 00:24:29.792 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:24:29.792 | 99.99th=[19268] 00:24:29.792 bw ( KiB/s): min=56480, max=80736, per=85.58%, avg=66784.00, stdev=10134.89, samples=4 00:24:29.792 iops : min= 3530, max= 5046, avg=4174.00, stdev=633.43, samples=4 00:24:29.792 lat (msec) : 4=0.08%, 10=52.28%, 20=46.60%, 50=0.62%, 100=0.42% 00:24:29.792 cpu : usr=73.20%, sys=23.05%, ctx=22, majf=0, minf=1 00:24:29.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:29.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:29.792 issued rwts: total=16105,8365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:29.792 00:24:29.792 Run status group 0 (all jobs): 00:24:29.792 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=252MiB (264MB), run=2005-2005msec 00:24:29.792 WRITE: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=131MiB (137MB), run=1715-1715msec 00:24:29.792 07:41:45 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.792 07:41:45 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:24:29.792 07:41:45 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:24:29.792 07:41:45 -- host/fio.sh@51 -- # get_nvme_bdfs 00:24:29.792 07:41:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:29.792 07:41:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:24:29.792 07:41:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:29.792 07:41:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:29.792 07:41:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:29.792 07:41:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:24:29.792 07:41:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:24:29.792 07:41:45 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:24:33.071 Nvme0n1 00:24:33.071 07:41:48 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:36.349 07:41:51 -- host/fio.sh@53 -- # ls_guid=86dab5a8-f449-48eb-96dc-aee928c37d50 00:24:36.349 07:41:51 -- host/fio.sh@54 -- # get_lvs_free_mb 86dab5a8-f449-48eb-96dc-aee928c37d50 00:24:36.349 07:41:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=86dab5a8-f449-48eb-96dc-aee928c37d50 00:24:36.349 07:41:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:36.349 07:41:51 -- common/autotest_common.sh@1345 -- # local fc 00:24:36.349 07:41:51 -- common/autotest_common.sh@1346 -- # local cs 00:24:36.349 07:41:51 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:36.349 07:41:52 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:36.349 { 00:24:36.349 "uuid": "86dab5a8-f449-48eb-96dc-aee928c37d50", 00:24:36.349 "name": "lvs_0", 00:24:36.349 "base_bdev": "Nvme0n1", 00:24:36.349 "total_data_clusters": 930, 00:24:36.349 "free_clusters": 930, 00:24:36.349 "block_size": 512, 00:24:36.349 "cluster_size": 1073741824 00:24:36.349 } 00:24:36.349 ]' 00:24:36.349 07:41:52 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="86dab5a8-f449-48eb-96dc-aee928c37d50") .free_clusters' 00:24:36.349 07:41:52 -- common/autotest_common.sh@1348 -- # fc=930 00:24:36.349 07:41:52 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="86dab5a8-f449-48eb-96dc-aee928c37d50") .cluster_size' 00:24:36.349 07:41:52 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:24:36.349 07:41:52 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:24:36.349 07:41:52 -- common/autotest_common.sh@1353 -- # echo 952320 00:24:36.349 952320 00:24:36.349 07:41:52 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:24:36.349 e1a5753f-006e-447b-b8b3-9f881b5d95f9 00:24:36.607 07:41:52 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:36.607 07:41:52 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:36.865 07:41:53 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:37.123 07:41:53 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:37.123 07:41:53 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:37.123 07:41:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:37.123 07:41:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:37.123 07:41:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:37.123 07:41:53 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.123 07:41:53 -- common/autotest_common.sh@1320 -- # shift 00:24:37.123 07:41:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:37.123 07:41:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.123 07:41:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.123 07:41:53 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:37.123 07:41:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:37.123 07:41:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:37.123 07:41:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:37.123 07:41:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.123 07:41:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.123 07:41:53 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:37.123 07:41:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:37.123 07:41:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:37.123 07:41:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:37.123 07:41:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:37.123 07:41:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:37.381 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:37.381 fio-3.35 00:24:37.381 Starting 1 thread 00:24:37.381 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.908 00:24:39.908 test: (groupid=0, jobs=1): err= 0: pid=4183883: Sun Jul 14 07:41:55 2024 00:24:39.908 read: IOPS=4864, BW=19.0MiB/s (19.9MB/s)(38.2MiB/2009msec) 00:24:39.908 slat (nsec): min=1924, max=126749, avg=2600.05, stdev=2117.04 00:24:39.908 clat (usec): min=1682, max=175421, avg=14512.78, stdev=12847.74 00:24:39.908 lat (usec): min=1684, max=175450, avg=14515.38, stdev=12847.98 00:24:39.908 clat percentiles (msec): 00:24:39.908 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:24:39.908 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:24:39.908 | 70.00th=[ 15], 80.00th=[ 15], 90.00th=[ 16], 95.00th=[ 17], 00:24:39.908 | 99.00th=[ 18], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 176], 00:24:39.908 | 99.99th=[ 176] 00:24:39.908 bw ( KiB/s): min=14136, max=21632, per=99.73%, avg=19406.00, stdev=3530.16, samples=4 00:24:39.908 iops : min= 3534, max= 5408, avg=4851.50, stdev=882.54, samples=4 00:24:39.908 write: IOPS=4853, BW=19.0MiB/s (19.9MB/s)(38.1MiB/2009msec); 0 zone resets 00:24:39.908 slat (usec): min=2, max=102, avg= 2.67, stdev= 1.62 00:24:39.908 clat (usec): min=561, max=172404, avg=11674.42, stdev=12036.75 00:24:39.908 lat (usec): min=564, max=172410, avg=11677.09, stdev=12036.98 00:24:39.908 clat percentiles (msec): 00:24:39.908 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:24:39.908 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:24:39.908 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:24:39.908 | 99.00th=[ 15], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:24:39.908 | 99.99th=[ 174] 00:24:39.908 bw ( KiB/s): 
min=14888, max=21184, per=99.86%, avg=19386.00, stdev=3010.18, samples=4 00:24:39.908 iops : min= 3722, max= 5296, avg=4846.50, stdev=752.54, samples=4 00:24:39.908 lat (usec) : 750=0.01% 00:24:39.908 lat (msec) : 2=0.05%, 4=0.08%, 10=13.55%, 20=85.60%, 50=0.07% 00:24:39.908 lat (msec) : 250=0.66% 00:24:39.908 cpu : usr=50.80%, sys=44.12%, ctx=84, majf=0, minf=5 00:24:39.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:24:39.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:39.908 issued rwts: total=9773,9750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:39.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:39.908 00:24:39.908 Run status group 0 (all jobs): 00:24:39.908 READ: bw=19.0MiB/s (19.9MB/s), 19.0MiB/s-19.0MiB/s (19.9MB/s-19.9MB/s), io=38.2MiB (40.0MB), run=2009-2009msec 00:24:39.908 WRITE: bw=19.0MiB/s (19.9MB/s), 19.0MiB/s-19.0MiB/s (19.9MB/s-19.9MB/s), io=38.1MiB (39.9MB), run=2009-2009msec 00:24:39.908 07:41:55 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:39.908 07:41:56 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:41.282 07:41:57 -- host/fio.sh@64 -- # ls_nested_guid=7586c533-a24a-4b7f-af9a-b1ba3955dd56 00:24:41.282 07:41:57 -- host/fio.sh@65 -- # get_lvs_free_mb 7586c533-a24a-4b7f-af9a-b1ba3955dd56 00:24:41.282 07:41:57 -- common/autotest_common.sh@1343 -- # local lvs_uuid=7586c533-a24a-4b7f-af9a-b1ba3955dd56 00:24:41.282 07:41:57 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:41.282 07:41:57 -- common/autotest_common.sh@1345 -- # local fc 00:24:41.282 07:41:57 -- common/autotest_common.sh@1346 -- # local cs 00:24:41.282 07:41:57 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:41.282 07:41:57 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:41.282 { 00:24:41.282 "uuid": "86dab5a8-f449-48eb-96dc-aee928c37d50", 00:24:41.282 "name": "lvs_0", 00:24:41.282 "base_bdev": "Nvme0n1", 00:24:41.282 "total_data_clusters": 930, 00:24:41.282 "free_clusters": 0, 00:24:41.282 "block_size": 512, 00:24:41.282 "cluster_size": 1073741824 00:24:41.282 }, 00:24:41.282 { 00:24:41.282 "uuid": "7586c533-a24a-4b7f-af9a-b1ba3955dd56", 00:24:41.282 "name": "lvs_n_0", 00:24:41.282 "base_bdev": "e1a5753f-006e-447b-b8b3-9f881b5d95f9", 00:24:41.282 "total_data_clusters": 237847, 00:24:41.282 "free_clusters": 237847, 00:24:41.282 "block_size": 512, 00:24:41.282 "cluster_size": 4194304 00:24:41.282 } 00:24:41.282 ]' 00:24:41.282 07:41:57 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="7586c533-a24a-4b7f-af9a-b1ba3955dd56") .free_clusters' 00:24:41.282 07:41:57 -- common/autotest_common.sh@1348 -- # fc=237847 00:24:41.282 07:41:57 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="7586c533-a24a-4b7f-af9a-b1ba3955dd56") .cluster_size' 00:24:41.282 07:41:57 -- common/autotest_common.sh@1349 -- # cs=4194304 00:24:41.282 07:41:57 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:24:41.282 07:41:57 -- common/autotest_common.sh@1353 -- # echo 951388 00:24:41.282 951388 00:24:41.282 07:41:57 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 
lbd_nest_0 951388 00:24:42.214 2d47e4e5-6598-4052-99a0-8b8037826e88 00:24:42.214 07:41:58 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:42.214 07:41:58 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:42.471 07:41:58 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:42.729 07:41:58 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:42.729 07:41:58 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:42.729 07:41:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:42.729 07:41:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:42.729 07:41:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:42.729 07:41:58 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.729 07:41:58 -- common/autotest_common.sh@1320 -- # shift 00:24:42.729 07:41:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:42.729 07:41:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.729 07:41:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.729 07:41:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:42.729 07:41:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:42.729 07:41:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:42.729 07:41:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:42.729 07:41:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.729 07:41:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.729 07:41:58 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:42.729 07:41:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:42.729 07:41:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:42.729 07:41:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:42.729 07:41:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:42.729 07:41:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:42.987 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:42.987 fio-3.35 00:24:42.987 Starting 1 thread 00:24:42.987 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.523 00:24:45.523 test: (groupid=0, jobs=1): err= 0: pid=4184640: Sun Jul 14 07:42:01 2024 00:24:45.523 read: IOPS=6158, BW=24.1MiB/s (25.2MB/s)(48.3MiB/2008msec) 
00:24:45.523 slat (nsec): min=1940, max=137725, avg=2611.81, stdev=1863.68 00:24:45.523 clat (usec): min=4240, max=18554, avg=11513.89, stdev=1100.72 00:24:45.523 lat (usec): min=4255, max=18556, avg=11516.50, stdev=1100.63 00:24:45.523 clat percentiles (usec): 00:24:45.523 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:24:45.523 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:24:45.523 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12780], 95.00th=[13173], 00:24:45.523 | 99.00th=[15401], 99.50th=[16188], 99.90th=[17433], 99.95th=[17695], 00:24:45.523 | 99.99th=[18482] 00:24:45.523 bw ( KiB/s): min=23392, max=25224, per=99.86%, avg=24600.00, stdev=858.08, samples=4 00:24:45.523 iops : min= 5848, max= 6306, avg=6150.00, stdev=214.52, samples=4 00:24:45.523 write: IOPS=6142, BW=24.0MiB/s (25.2MB/s)(48.2MiB/2008msec); 0 zone resets 00:24:45.523 slat (usec): min=2, max=105, avg= 2.74, stdev= 1.50 00:24:45.523 clat (usec): min=2200, max=17536, avg=9146.11, stdev=959.21 00:24:45.523 lat (usec): min=2205, max=17539, avg=9148.84, stdev=959.19 00:24:45.523 clat percentiles (usec): 00:24:45.523 | 1.00th=[ 7046], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 8455], 00:24:45.523 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:24:45.523 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10552], 00:24:45.523 | 99.00th=[12125], 99.50th=[12911], 99.90th=[16057], 99.95th=[16450], 00:24:45.523 | 99.99th=[17433] 00:24:45.523 bw ( KiB/s): min=24128, max=24840, per=99.91%, avg=24550.00, stdev=308.97, samples=4 00:24:45.523 iops : min= 6032, max= 6210, avg=6137.50, stdev=77.24, samples=4 00:24:45.523 lat (msec) : 4=0.04%, 10=45.66%, 20=54.30% 00:24:45.523 cpu : usr=53.31%, sys=40.51%, ctx=68, majf=0, minf=5 00:24:45.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:45.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.523 issued rwts: total=12367,12335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.523 00:24:45.523 Run status group 0 (all jobs): 00:24:45.523 READ: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=48.3MiB (50.7MB), run=2008-2008msec 00:24:45.523 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.2MiB (50.5MB), run=2008-2008msec 00:24:45.523 07:42:01 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:45.523 07:42:01 -- host/fio.sh@74 -- # sync 00:24:45.523 07:42:01 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:49.700 07:42:05 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:49.700 07:42:05 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:52.980 07:42:08 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:52.980 07:42:08 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:54.881 07:42:10 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:54.881 07:42:10 -- host/fio.sh@85 -- # 
rm -f ./local-test-0-verify.state 00:24:54.881 07:42:10 -- host/fio.sh@86 -- # nvmftestfini 00:24:54.881 07:42:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:54.881 07:42:10 -- nvmf/common.sh@116 -- # sync 00:24:54.881 07:42:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:54.881 07:42:10 -- nvmf/common.sh@119 -- # set +e 00:24:54.881 07:42:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:54.881 07:42:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:54.881 rmmod nvme_tcp 00:24:54.881 rmmod nvme_fabrics 00:24:54.881 rmmod nvme_keyring 00:24:54.881 07:42:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:54.881 07:42:10 -- nvmf/common.sh@123 -- # set -e 00:24:54.881 07:42:10 -- nvmf/common.sh@124 -- # return 0 00:24:54.881 07:42:10 -- nvmf/common.sh@477 -- # '[' -n 4181611 ']' 00:24:54.881 07:42:10 -- nvmf/common.sh@478 -- # killprocess 4181611 00:24:54.881 07:42:10 -- common/autotest_common.sh@926 -- # '[' -z 4181611 ']' 00:24:54.881 07:42:10 -- common/autotest_common.sh@930 -- # kill -0 4181611 00:24:54.881 07:42:10 -- common/autotest_common.sh@931 -- # uname 00:24:54.881 07:42:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:54.881 07:42:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4181611 00:24:54.881 07:42:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:54.881 07:42:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:54.881 07:42:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4181611' 00:24:54.881 killing process with pid 4181611 00:24:54.881 07:42:10 -- common/autotest_common.sh@945 -- # kill 4181611 00:24:54.881 07:42:10 -- common/autotest_common.sh@950 -- # wait 4181611 00:24:55.139 07:42:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:55.139 07:42:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:55.139 07:42:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:55.139 07:42:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.139 07:42:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:55.139 07:42:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.139 07:42:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.139 07:42:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.040 07:42:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:57.040 00:24:57.040 real 0m37.557s 00:24:57.040 user 2m21.789s 00:24:57.040 sys 0m7.771s 00:24:57.040 07:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.040 07:42:13 -- common/autotest_common.sh@10 -- # set +x 00:24:57.040 ************************************ 00:24:57.040 END TEST nvmf_fio_host 00:24:57.040 ************************************ 00:24:57.040 07:42:13 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:57.040 07:42:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:57.040 07:42:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:57.040 07:42:13 -- common/autotest_common.sh@10 -- # set +x 00:24:57.040 ************************************ 00:24:57.040 START TEST nvmf_failover 00:24:57.040 ************************************ 00:24:57.040 07:42:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:57.296 * Looking for test storage... 
00:24:57.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.296 07:42:13 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.296 07:42:13 -- nvmf/common.sh@7 -- # uname -s 00:24:57.296 07:42:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.296 07:42:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.296 07:42:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.296 07:42:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.296 07:42:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.296 07:42:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.296 07:42:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.296 07:42:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.296 07:42:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.296 07:42:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.296 07:42:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.297 07:42:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.297 07:42:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.297 07:42:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.297 07:42:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.297 07:42:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.297 07:42:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.297 07:42:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.297 07:42:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.297 07:42:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.297 07:42:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.297 07:42:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.297 07:42:13 -- paths/export.sh@5 -- # export PATH 00:24:57.297 07:42:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.297 07:42:13 -- nvmf/common.sh@46 -- # : 0 00:24:57.297 07:42:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:57.297 07:42:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:57.297 07:42:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:57.297 07:42:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.297 07:42:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.297 07:42:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:57.297 07:42:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:57.297 07:42:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:57.297 07:42:13 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:57.297 07:42:13 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:57.297 07:42:13 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:57.297 07:42:13 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.297 07:42:13 -- host/failover.sh@18 -- # nvmftestinit 00:24:57.297 07:42:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:57.297 07:42:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.297 07:42:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:57.297 07:42:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:57.297 07:42:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:57.297 07:42:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.297 07:42:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.297 07:42:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.297 07:42:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:57.297 07:42:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:57.297 07:42:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:57.297 07:42:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.195 07:42:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:59.195 07:42:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:59.195 07:42:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:59.195 07:42:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:59.195 07:42:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:59.195 07:42:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:59.195 07:42:15 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:24:59.195 07:42:15 -- nvmf/common.sh@294 -- # net_devs=() 00:24:59.195 07:42:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:59.195 07:42:15 -- nvmf/common.sh@295 -- # e810=() 00:24:59.195 07:42:15 -- nvmf/common.sh@295 -- # local -ga e810 00:24:59.195 07:42:15 -- nvmf/common.sh@296 -- # x722=() 00:24:59.195 07:42:15 -- nvmf/common.sh@296 -- # local -ga x722 00:24:59.195 07:42:15 -- nvmf/common.sh@297 -- # mlx=() 00:24:59.195 07:42:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:59.195 07:42:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.195 07:42:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.195 07:42:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.195 07:42:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.196 07:42:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.196 07:42:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.196 07:42:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.196 07:42:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.196 07:42:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.196 07:42:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.196 07:42:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.196 07:42:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:59.196 07:42:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:59.196 07:42:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:59.196 07:42:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:59.196 07:42:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:59.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:59.196 07:42:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:59.196 07:42:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:59.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:59.196 07:42:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:59.196 07:42:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:59.196 07:42:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.196 07:42:15 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:24:59.196 07:42:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.196 07:42:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:59.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:59.196 07:42:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.196 07:42:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:59.196 07:42:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.196 07:42:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:59.196 07:42:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.196 07:42:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:59.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:59.196 07:42:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.196 07:42:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:59.196 07:42:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:59.196 07:42:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:59.196 07:42:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.196 07:42:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.196 07:42:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.196 07:42:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:59.196 07:42:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.196 07:42:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.196 07:42:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:59.196 07:42:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.196 07:42:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.196 07:42:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:59.196 07:42:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:59.196 07:42:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.196 07:42:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.196 07:42:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.196 07:42:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.196 07:42:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:59.196 07:42:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.196 07:42:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.196 07:42:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.196 07:42:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:59.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:24:59.196 00:24:59.196 --- 10.0.0.2 ping statistics --- 00:24:59.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.196 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:59.196 07:42:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:24:59.196 00:24:59.196 --- 10.0.0.1 ping statistics --- 00:24:59.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.196 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:24:59.196 07:42:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.196 07:42:15 -- nvmf/common.sh@410 -- # return 0 00:24:59.196 07:42:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:59.196 07:42:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.196 07:42:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:59.196 07:42:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.196 07:42:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:59.196 07:42:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:59.196 07:42:15 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:59.196 07:42:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:59.196 07:42:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:59.196 07:42:15 -- common/autotest_common.sh@10 -- # set +x 00:24:59.196 07:42:15 -- nvmf/common.sh@469 -- # nvmfpid=4188563 00:24:59.196 07:42:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:59.196 07:42:15 -- nvmf/common.sh@470 -- # waitforlisten 4188563 00:24:59.196 07:42:15 -- common/autotest_common.sh@819 -- # '[' -z 4188563 ']' 00:24:59.196 07:42:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.196 07:42:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:59.196 07:42:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.196 07:42:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:59.196 07:42:15 -- common/autotest_common.sh@10 -- # set +x 00:24:59.455 [2024-07-14 07:42:15.383400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:59.455 [2024-07-14 07:42:15.383478] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.455 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.455 [2024-07-14 07:42:15.451391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:59.455 [2024-07-14 07:42:15.565859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:59.455 [2024-07-14 07:42:15.566043] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.455 [2024-07-14 07:42:15.566065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.455 [2024-07-14 07:42:15.566079] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
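The failover run that follows repeats this bring-up and then builds one subsystem with three listeners, so a path can be killed while alternates stay reachable. The RPC sequence from host/failover.sh@22 through @43, reduced to a sketch (rpc.py stands for the workspace's spdk/scripts/rpc.py; the three add_listener calls are compressed into a loop here):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s "$port"
  done
  # bdevperf (-z -r /var/tmp/bdevperf.sock) attaches two paths under one
  # controller name, giving NVMe0n1 a second path on port 4421:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # fault injection: drop the active listener; the qpair-state messages that
  # close this excerpt are the 10.0.0.2:4420 connections being torn down
  # while bdevperf's I/O fails over to the 4421 path
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420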
00:24:59.455 [2024-07-14 07:42:15.566187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:59.455 [2024-07-14 07:42:15.566219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:24:59.455 [2024-07-14 07:42:15.566222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:00.388 07:42:16 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:25:00.388 07:42:16 -- common/autotest_common.sh@852 -- # return 0
00:25:00.388 07:42:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:25:00.388 07:42:16 -- common/autotest_common.sh@718 -- # xtrace_disable
00:25:00.388 07:42:16 -- common/autotest_common.sh@10 -- # set +x
00:25:00.388 07:42:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:00.388 07:42:16 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:00.646 [2024-07-14 07:42:16.563773] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:00.646 07:42:16 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:00.904 Malloc0
00:25:00.904 07:42:16 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:00.904 07:42:17 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:01.163 07:42:17 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:01.425 [2024-07-14 07:42:17.539433] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:01.425 07:42:17 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:01.682 [2024-07-14 07:42:17.768104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:01.682 07:42:17 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:01.940 [2024-07-14 07:42:18.004938] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:01.940 07:42:18 -- host/failover.sh@31 -- # bdevperf_pid=4188863
00:25:01.940 07:42:18 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:01.940 07:42:18 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:01.940 07:42:18 -- host/failover.sh@34 -- # waitforlisten 4188863 /var/tmp/bdevperf.sock
00:25:01.940 07:42:18 -- common/autotest_common.sh@819 -- # '[' -z 4188863 ']'
00:25:01.940 07:42:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:01.940 07:42:18 -- common/autotest_common.sh@824 -- # local max_retries=100
00:25:01.940 07:42:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:01.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:01.940 07:42:18 -- common/autotest_common.sh@828 -- # xtrace_disable
00:25:01.940 07:42:18 -- common/autotest_common.sh@10 -- # set +x
00:25:02.874 07:42:18 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:25:02.874 07:42:18 -- common/autotest_common.sh@852 -- # return 0
00:25:02.874 07:42:18 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:03.441 NVMe0n1
00:25:03.441 07:42:19 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:03.699 
00:25:03.700 00
00:25:03.700 07:42:19 -- host/failover.sh@39 -- # run_test_pid=4189132
00:25:03.700 07:42:19 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:03.700 07:42:19 -- host/failover.sh@41 -- # sleep 1
00:25:04.635 07:42:20 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:04.895 [2024-07-14 07:42:20.920812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0f370 is same with the state(5) to be set
00:25:04.895 [... the same recv-state message for tqpair=0xf0f370 repeats, timestamps 07:42:20.920946 through 07:42:20.921406, while the 4420 qpair is torn down ...]
00:25:04.895 07:42:20 -- host/failover.sh@45 -- # sleep 3
00:25:08.171 07:42:23 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:08.171 00
00:25:08.427 07:42:24 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:08.427 [2024-07-14 07:42:24.562212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0fb80 is same with the state(5) to be set
00:25:08.428 [... the same recv-state message for tqpair=0xf0fb80 repeats, timestamps 07:42:24.562315 through 07:42:24.563039, while the 4421 qpair is torn down ...]
00:25:08.428 07:42:24 -- host/failover.sh@50 -- # sleep 3
00:25:11.708 07:42:27 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:11.708 [2024-07-14 07:42:27.845949] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:11.708 07:42:27 -- host/failover.sh@55 -- # sleep 1
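[Editor's note] Stripped of the xtrace noise, the failover exercise in this block is a short RPC sequence against the two apps. The sketch below condenses the commands exactly as the log shows them (the RPC, BDEV_SOCK, and NQN shorthands are added here for readability): each nvmf_subsystem_remove_listener drops the path that I/O is currently using, and the bdev_nvme module in bdevperf fails over to another attached path.

  # Condensed from the log: two (then three) paths attached under one -b name,
  # with listeners removed one at a time to force path failover.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BDEV_SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC -s $BDEV_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $RPC -s $BDEV_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
  sleep 3
  $RPC -s $BDEV_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore the first path

Note that the remove_listener calls carry no -s socket option, so they go to the target's default /var/tmp/spdk.sock, while the attach calls are addressed to the bdevperf app.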
00:25:13.084 07:42:28 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:13.084 [2024-07-14 07:42:29.096506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcba2f0 is same with the state(5) to be set
00:25:13.085 [... the same recv-state message for tqpair=0xcba2f0 repeats, timestamps 07:42:29.096576 through 07:42:29.097251, while the 4422 qpair is torn down ...]
00:25:13.085 07:42:29 -- host/failover.sh@59 -- # wait 4189132
00:25:19.652 0
00:25:19.652 07:42:34 -- host/failover.sh@61 -- # killprocess 4188863
00:25:19.652 07:42:34 -- common/autotest_common.sh@926 -- # '[' -z 4188863 ']'
00:25:19.652 07:42:34 -- common/autotest_common.sh@930 -- # kill -0 4188863
00:25:19.652 07:42:34 -- common/autotest_common.sh@931 -- # uname
00:25:19.652 07:42:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:25:19.652 07:42:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4188863
00:25:19.652 07:42:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:25:19.652 07:42:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:25:19.652 07:42:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4188863'
00:25:19.652 killing process with pid 4188863
00:25:19.652 07:42:34 -- common/autotest_common.sh@945 -- # kill 4188863
00:25:19.652 07:42:34 -- common/autotest_common.sh@950 -- # wait 4188863
00:25:19.652 07:42:35 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:19.652 [2024-07-14 07:42:18.059417] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:25:19.652 [2024-07-14 07:42:18.059499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4188863 ]
00:25:19.652 EAL: No free 2048 kB hugepages reported on node 1
00:25:19.652 [2024-07-14 07:42:18.120030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:19.652 [2024-07-14 07:42:18.227055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:19.652 Running I/O for 15 seconds...
00:25:19.652 [2024-07-14 07:42:20.921776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.652 [2024-07-14 07:42:20.921821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.652 [... the try.txt dump continues with the same print_command / ABORTED - SQ DELETION (00/08) pair for each READ and WRITE that was in flight on qid:1 when the first path was dropped (lba 117320 through 118504, timestamps 07:42:20.921776 through 07:42:20.925251); the dump is truncated here ...]
lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.655 [2024-07-14 07:42:20.925294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.655 [2024-07-14 07:42:20.925325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.655 [2024-07-14 07:42:20.925411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.655 [2024-07-14 07:42:20.925468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.655 [2024-07-14 07:42:20.925525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:20.925762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188600 is same with the state(5) to be set 00:25:19.655 [2024-07-14 07:42:20.925794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.655 [2024-07-14 07:42:20.925805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.655 [2024-07-14 07:42:20.925816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117976 len:8 PRP1 0x0 PRP2 0x0 00:25:19.655 [2024-07-14 07:42:20.925829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:20.925926] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2188600 was disconnected and freed. 
00:25:19.655 [2024-07-14 07:42:20.925955] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:19.655 [2024-07-14 07:42:20.925992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.655 [2024-07-14 07:42:20.926010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.655 [2024-07-14 07:42:20.926025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.655 [2024-07-14 07:42:20.926038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.655 [2024-07-14 07:42:20.926052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.655 [2024-07-14 07:42:20.926065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.655 [2024-07-14 07:42:20.926078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.655 [2024-07-14 07:42:20.926091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.655 [2024-07-14 07:42:20.926104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:19.655 [2024-07-14 07:42:20.926143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169bd0 (9): Bad file descriptor
00:25:19.655 [2024-07-14 07:42:20.928444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:19.655 [2024-07-14 07:42:20.997713] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
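The block above is one complete failover cycle: the TCP qpair to 10.0.0.2:4420 goes away, bdev_nvme completes every queued READ/WRITE (and the pending admin ASYNC EVENT REQUESTs) as ABORTED - SQ DELETION (00/08), bdev_nvme_failover_trid switches to the next registered path at 10.0.0.2:4421, and the controller is disconnected and reset successfully. A minimal sketch of how a two-listener target like this can be stood up with SPDK's scripts/rpc.py is below; the NQN and addresses mirror the log, but the bdev name, sizes, serial and exact flags are assumptions for illustration, not the test's actual script:

  rpc=./scripts/rpc.py
  # One TCP subsystem backed by a malloc bdev (name/size assumed).
  $rpc nvmf_create_transport -t tcp
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on the same subsystem give the initiator a path to fail over to.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4421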
00:25:19.655 [2024-07-14 07:42:24.563252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563607] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.655 [2024-07-14 07:42:24.563725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.655 [2024-07-14 07:42:24.563738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563897] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.563978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.563991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.656 [2024-07-14 07:42:24.564103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2512 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.656 [2024-07-14 07:42:24.564434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 
[2024-07-14 07:42:24.564461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.656 [2024-07-14 07:42:24.564514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.656 [2024-07-14 07:42:24.564541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.656 [2024-07-14 07:42:24.564569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.656 [2024-07-14 07:42:24.564583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.656 [2024-07-14 07:42:24.564596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.564762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.564897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.564941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.564973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.564989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:19.657 [2024-07-14 07:42:24.565398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565680] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.657 [2024-07-14 07:42:24.565859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.657 [2024-07-14 07:42:24.565898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.657 [2024-07-14 07:42:24.565913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.565927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.565944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.565960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.565973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.565988] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.658 [2024-07-14 07:42:24.566143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.658 [2024-07-14 07:42:24.566187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.658 [2024-07-14 07:42:24.566242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3392 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.566319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.658 [2024-07-14 07:42:24.566335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of further READ/WRITE command prints on qid:1 (lba 3408-3536 and 2904-2960), each completed with ABORTED - SQ DELETION (00/08), elided ...]
00:25:19.658 [2024-07-14 07:42:24.567060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176050 is same with the state(5) to be set 00:25:19.658 [2024-07-14 07:42:24.567081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.658 [2024-07-14 07:42:24.567092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.658 [2024-07-14 07:42:24.567103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2984 len:8 PRP1 0x0 PRP2 0x0 00:25:19.658 [2024-07-14 07:42:24.567116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.567183] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2176050 was disconnected and freed. reset controller.
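What the flood above records: the test has just torn down the controller's active TCP path, so bdev_nvme's disconnected-qpair callback (bdev_nvme_disconnected_qpair_cb) aborts every command still outstanding on the I/O qpair, completing each one with the generic ABORTED - SQ DELETION (SCT 00/SC 08) status before freeing the qpair and scheduling a controller reset. When triaging a saved copy of such a log, a one-liner along these lines condenses the flood; console.log is a placeholder name, not a file produced by this job.

# Hypothetical triage helper, not part of the test: tally the aborted
# READ/WRITE command prints per submission queue in a saved copy of this log.
grep -o '\(READ\|WRITE\) sqid:[0-9]*' console.log | sort | uniq -c | sort -rn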
00:25:19.658 [2024-07-14 07:42:24.567202] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:19.658 [2024-07-14 07:42:24.567250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.658 [2024-07-14 07:42:24.567268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.658 [2024-07-14 07:42:24.567284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.659 [2024-07-14 07:42:24.567297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.659 [2024-07-14 07:42:24.567311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.659 [2024-07-14 07:42:24.567324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.659 [2024-07-14 07:42:24.567337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.659 [2024-07-14 07:42:24.567350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.659 [2024-07-14 07:42:24.567363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.659 [2024-07-14 07:42:24.567405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169bd0 (9): Bad file descriptor 00:25:19.659 [2024-07-14 07:42:24.569588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.659 [2024-07-14 07:42:24.717861] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
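The failover notice above ("Start failover from 10.0.0.2:4421 to 10.0.0.2:4422") is bdev_nvme rotating to the next transport ID registered for controller NVMe0. Those alternate paths come from attaching the same controller name once per listener port, the same pattern the RPC trace later in this log uses for the second bdevperf instance; a minimal sketch of that registration, reusing the socket, NQN, and ports seen in the trace:

# Sketch of the multipath registration behind the failover above. Repeating
# bdev_nvme_attach_controller with the same -b name adds each new trid as a
# failover path for NVMe0 rather than creating another bdev.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done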
00:25:19.659 [2024-07-14 07:42:29.096561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.659 [2024-07-14 07:42:29.096603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three further ASYNC EVENT REQUEST prints (qid:0 cid:1-3), each completed with ABORTED - SQ DELETION (00/08), elided ...]
00:25:19.659 [2024-07-14 07:42:29.096707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169bd0 is same with the state(5) to be set 00:25:19.659 [2024-07-14 07:42:29.097452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.659 [2024-07-14 07:42:29.097474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... several dozen further READ/WRITE command prints on qid:1 (lba 13568-14848), each completed with ABORTED - SQ DELETION (00/08), elided ...]
00:25:19.663 [2024-07-14 07:42:29.101231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.663 [2024-07-14 07:42:29.101248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.663 [2024-07-14 07:42:29.101261]
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2196540 is same with the state(5) to be set 00:25:19.663 [2024-07-14 07:42:29.101277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.663 [2024-07-14 07:42:29.101288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.663 [2024-07-14 07:42:29.101299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14344 len:8 PRP1 0x0 PRP2 0x0 00:25:19.663 [2024-07-14 07:42:29.101325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.663 [2024-07-14 07:42:29.101386] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2196540 was disconnected and freed. reset controller. 00:25:19.663 [2024-07-14 07:42:29.101404] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:19.663 [2024-07-14 07:42:29.101420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.663 [2024-07-14 07:42:29.103703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.663 [2024-07-14 07:42:29.103742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2169bd0 (9): Bad file descriptor 00:25:19.663 [2024-07-14 07:42:29.216625] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:19.663 00:25:19.663 Latency(us) 00:25:19.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.663 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:19.663 Verification LBA range: start 0x0 length 0x4000 00:25:19.663 NVMe0n1 : 15.01 12966.06 50.65 1304.20 0.00 8953.13 885.95 16117.00 00:25:19.663 =================================================================================================================== 00:25:19.663 Total : 12966.06 50.65 1304.20 0.00 8953.13 885.95 16117.00 00:25:19.663 Received shutdown signal, test time was about 15.000000 seconds 00:25:19.663 00:25:19.663 Latency(us) 00:25:19.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.663 =================================================================================================================== 00:25:19.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.663 07:42:35 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:19.663 07:42:35 -- host/failover.sh@65 -- # count=3 00:25:19.663 07:42:35 -- host/failover.sh@67 -- # (( count != 3 )) 00:25:19.663 07:42:35 -- host/failover.sh@73 -- # bdevperf_pid=4190908 00:25:19.663 07:42:35 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:19.663 07:42:35 -- host/failover.sh@75 -- # waitforlisten 4190908 /var/tmp/bdevperf.sock 00:25:19.663 07:42:35 -- common/autotest_common.sh@819 -- # '[' -z 4190908 ']' 00:25:19.663 07:42:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.663 07:42:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:19.663 07:42:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
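The shell trace above is the pass criterion for the 15-second phase: the captured bdevperf output (try.txt) must contain exactly three "Resetting controller successful" lines, one per failover hop around the 4420/4421/4422 listener ring. The second bdevperf instance is then launched with -z, so it idles on /var/tmp/bdevperf.sock until driven over RPC. Reduced to its essence the check looks like this; $testdir standing in for the test's output directory is an assumption:

# Assertion sketch, assuming try.txt lives under a $testdir output directory.
count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
(( count == 3 )) || exit 1   # any other reset count fails the failover phase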
00:25:19.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.663 07:42:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:19.663 07:42:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.232 07:42:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:20.232 07:42:36 -- common/autotest_common.sh@852 -- # return 0 00:25:20.232 07:42:36 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:20.232 [2024-07-14 07:42:36.361258] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:20.232 07:42:36 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:20.497 [2024-07-14 07:42:36.597914] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:20.497 07:42:36 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.061 NVMe0n1 00:25:21.061 07:42:37 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.319 00:25:21.319 07:42:37 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.885 00:25:21.885 07:42:37 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.885 07:42:37 -- host/failover.sh@82 -- # grep -q NVMe0 00:25:22.142 07:42:38 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.142 07:42:38 -- host/failover.sh@87 -- # sleep 3 00:25:25.432 07:42:41 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.432 07:42:41 -- host/failover.sh@88 -- # grep -q NVMe0 00:25:25.432 07:42:41 -- host/failover.sh@90 -- # run_test_pid=4191731 00:25:25.432 07:42:41 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:25.432 07:42:41 -- host/failover.sh@92 -- # wait 4191731 00:25:26.805 0 00:25:26.805 07:42:42 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:26.805 [2024-07-14 07:42:35.208526] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
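In the trace above, detaching port 4420 is the fault injection itself: dropping the active path forces bdev_nvme to fail over to the next registered trid, and the bdev_nvme_get_controllers | grep -q NVMe0 probes before and after confirm the controller object survives the hop. The try.txt dump beginning just above and continuing below is the second bdevperf instance's own log of that run. The injected step, in isolation:

# Fault-injection sketch: drop the active path, allow time for failover,
# then verify the controller is still registered with the bdevperf app.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0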
00:25:26.805 [2024-07-14 07:42:35.208626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4190908 ] 00:25:26.805 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.805 [2024-07-14 07:42:35.272052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.805 [2024-07-14 07:42:35.376801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.805 [2024-07-14 07:42:38.285435] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:26.805 [2024-07-14 07:42:38.285519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.805 [2024-07-14 07:42:38.285542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.805 [2024-07-14 07:42:38.285559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.805 [2024-07-14 07:42:38.285587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.805 [2024-07-14 07:42:38.285602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.805 [2024-07-14 07:42:38.285615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.805 [2024-07-14 07:42:38.285629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.805 [2024-07-14 07:42:38.285642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.805 [2024-07-14 07:42:38.285655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:26.805 [2024-07-14 07:42:38.285695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:26.805 [2024-07-14 07:42:38.285727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66bd0 (9): Bad file descriptor 00:25:26.805 [2024-07-14 07:42:38.291000] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:26.805 Running I/O for 1 seconds... 
00:25:26.805
00:25:26.805 Latency(us)
00:25:26.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:26.805 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:26.805 Verification LBA range: start 0x0 length 0x4000
00:25:26.805 NVMe0n1 : 1.01 13379.92 52.27 0.00 0.00 9521.70 1498.83 11165.39
00:25:26.805 ===================================================================================================================
00:25:26.805 Total : 13379.92 52.27 0.00 0.00 9521.70 1498.83 11165.39
00:25:26.805 07:42:42 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:26.805 07:42:42 -- host/failover.sh@95 -- # grep -q NVMe0
00:25:26.805 07:42:42 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:27.062 07:42:43 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:27.062 07:42:43 -- host/failover.sh@99 -- # grep -q NVMe0
00:25:27.320 07:42:43 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:27.577 07:42:43 -- host/failover.sh@101 -- # sleep 3
00:25:30.856 07:42:46 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:30.856 07:42:46 -- host/failover.sh@103 -- # grep -q NVMe0
00:25:30.856 07:42:46 -- host/failover.sh@108 -- # killprocess 4190908
00:25:30.856 07:42:46 -- common/autotest_common.sh@926 -- # '[' -z 4190908 ']'
00:25:30.856 07:42:46 -- common/autotest_common.sh@930 -- # kill -0 4190908
00:25:30.856 07:42:46 -- common/autotest_common.sh@931 -- # uname
00:25:30.856 07:42:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:25:30.856 07:42:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4190908
00:25:30.856 07:42:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:25:30.856 07:42:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:25:30.856 07:42:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4190908'
killing process with pid 4190908
00:25:30.856 07:42:46 -- common/autotest_common.sh@945 -- # kill 4190908
00:25:30.856 07:42:46 -- common/autotest_common.sh@950 -- # wait 4190908
00:25:31.114 07:42:47 -- host/failover.sh@110 -- # sync
00:25:31.114 07:42:47 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:31.373 07:42:47 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:31.373 07:42:47 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:31.373 07:42:47 -- host/failover.sh@116 -- # nvmftestfini
00:25:31.373 07:42:47 -- nvmf/common.sh@476 -- # nvmfcleanup
00:25:31.373 07:42:47 -- nvmf/common.sh@116 -- # sync
00:25:31.373 07:42:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:25:31.373 07:42:47 -- nvmf/common.sh@119 -- # set +e
00:25:31.373 07:42:47 -- nvmf/common.sh@120 -- # for i in {1..20}
00:25:31.373 07:42:47 -- nvmf/common.sh@121 --
# modprobe -v -r nvme-tcp 00:25:31.373 rmmod nvme_tcp 00:25:31.373 rmmod nvme_fabrics 00:25:31.373 rmmod nvme_keyring 00:25:31.373 07:42:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:31.373 07:42:47 -- nvmf/common.sh@123 -- # set -e 00:25:31.373 07:42:47 -- nvmf/common.sh@124 -- # return 0 00:25:31.373 07:42:47 -- nvmf/common.sh@477 -- # '[' -n 4188563 ']' 00:25:31.373 07:42:47 -- nvmf/common.sh@478 -- # killprocess 4188563 00:25:31.373 07:42:47 -- common/autotest_common.sh@926 -- # '[' -z 4188563 ']' 00:25:31.373 07:42:47 -- common/autotest_common.sh@930 -- # kill -0 4188563 00:25:31.373 07:42:47 -- common/autotest_common.sh@931 -- # uname 00:25:31.373 07:42:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:31.373 07:42:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4188563 00:25:31.373 07:42:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:31.373 07:42:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:31.373 07:42:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4188563' 00:25:31.373 killing process with pid 4188563 00:25:31.373 07:42:47 -- common/autotest_common.sh@945 -- # kill 4188563 00:25:31.373 07:42:47 -- common/autotest_common.sh@950 -- # wait 4188563 00:25:31.632 07:42:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:31.632 07:42:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:31.632 07:42:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:31.632 07:42:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.632 07:42:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:31.632 07:42:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.632 07:42:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.632 07:42:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.168 07:42:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:34.168 00:25:34.168 real 0m36.621s 00:25:34.168 user 2m9.634s 00:25:34.168 sys 0m5.911s 00:25:34.168 07:42:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.168 07:42:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.168 ************************************ 00:25:34.168 END TEST nvmf_failover 00:25:34.168 ************************************ 00:25:34.168 07:42:49 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:34.168 07:42:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:34.168 07:42:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:34.168 07:42:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.168 ************************************ 00:25:34.168 START TEST nvmf_discovery 00:25:34.168 ************************************ 00:25:34.168 07:42:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:34.168 * Looking for test storage... 
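[Editor's note] The teardown above is the stock nvmftestfini sequence: unload the host-side NVMe modules, kill the target reactor process, then drop the test namespace and addresses. A condensed sketch, with $nvmfpid standing in for the target PID recorded at startup (4188563 in this run) and the netns deletion an assumption about what _remove_spdk_ns does:

  # Host side: removing nvme-tcp also pulls out nvme_fabrics and nvme_keyring,
  # exactly as the rmmod lines above show.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Target side: SPDK reactors exit cleanly on SIGTERM.
  if kill -0 "$nvmfpid" 2>/dev/null; then
      kill "$nvmfpid" && wait "$nvmfpid"
  fi
  # Network side (assumed equivalent of _remove_spdk_ns, plus the logged flush).
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1

With the failover environment gone, run_test moves on to nvmf_discovery, whose test-storage probe continues below.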
00:25:34.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.168 07:42:49 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.168 07:42:49 -- nvmf/common.sh@7 -- # uname -s 00:25:34.168 07:42:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.168 07:42:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.168 07:42:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.168 07:42:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.168 07:42:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.168 07:42:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.168 07:42:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.168 07:42:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.168 07:42:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.168 07:42:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.168 07:42:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:34.168 07:42:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:34.168 07:42:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.168 07:42:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.168 07:42:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.168 07:42:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.168 07:42:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.168 07:42:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.168 07:42:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.168 07:42:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.168 07:42:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.168 07:42:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.168 07:42:49 -- paths/export.sh@5 -- # export PATH 00:25:34.168 07:42:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.168 07:42:49 -- nvmf/common.sh@46 -- # : 0 00:25:34.168 07:42:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:34.168 07:42:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:34.168 07:42:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:34.168 07:42:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.168 07:42:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.168 07:42:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:34.168 07:42:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:34.168 07:42:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:34.168 07:42:49 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:34.168 07:42:49 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:34.168 07:42:49 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:34.168 07:42:49 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:34.168 07:42:49 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:34.168 07:42:49 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:34.168 07:42:49 -- host/discovery.sh@25 -- # nvmftestinit 00:25:34.168 07:42:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:34.168 07:42:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.168 07:42:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:34.168 07:42:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:34.168 07:42:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:34.168 07:42:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.168 07:42:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.168 07:42:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.168 07:42:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:34.168 07:42:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:34.168 07:42:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:34.168 07:42:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.071 07:42:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:36.071 07:42:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:36.071 07:42:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:36.071 07:42:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:36.071 07:42:51 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:36.071 07:42:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:36.071 07:42:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:36.071 07:42:51 -- nvmf/common.sh@294 -- # net_devs=() 00:25:36.071 07:42:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:36.071 07:42:51 -- nvmf/common.sh@295 -- # e810=() 00:25:36.071 07:42:51 -- nvmf/common.sh@295 -- # local -ga e810 00:25:36.071 07:42:51 -- nvmf/common.sh@296 -- # x722=() 00:25:36.071 07:42:51 -- nvmf/common.sh@296 -- # local -ga x722 00:25:36.071 07:42:51 -- nvmf/common.sh@297 -- # mlx=() 00:25:36.071 07:42:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:36.071 07:42:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.071 07:42:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.072 07:42:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:36.072 07:42:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:36.072 07:42:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:36.072 07:42:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.072 07:42:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:36.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:36.072 07:42:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.072 07:42:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:36.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:36.072 07:42:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:36.072 07:42:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.072 
07:42:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.072 07:42:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.072 07:42:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.072 07:42:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:36.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:36.072 07:42:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.072 07:42:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.072 07:42:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.072 07:42:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.072 07:42:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.072 07:42:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:36.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:36.072 07:42:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.072 07:42:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:36.072 07:42:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:36.072 07:42:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:36.072 07:42:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.072 07:42:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.072 07:42:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.072 07:42:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:36.072 07:42:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.072 07:42:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.072 07:42:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:36.072 07:42:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.072 07:42:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.072 07:42:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:36.072 07:42:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:36.072 07:42:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.072 07:42:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.072 07:42:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.072 07:42:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.072 07:42:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:36.072 07:42:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.072 07:42:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.072 07:42:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.072 07:42:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:36.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:25:36.072 00:25:36.072 --- 10.0.0.2 ping statistics --- 00:25:36.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.072 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:36.072 07:42:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:25:36.072 00:25:36.072 --- 10.0.0.1 ping statistics --- 00:25:36.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.072 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:25:36.072 07:42:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.072 07:42:51 -- nvmf/common.sh@410 -- # return 0 00:25:36.072 07:42:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:36.072 07:42:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.072 07:42:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:36.072 07:42:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.072 07:42:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:36.072 07:42:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:36.072 07:42:51 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:36.072 07:42:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:36.072 07:42:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.072 07:42:51 -- common/autotest_common.sh@10 -- # set +x 00:25:36.072 07:42:51 -- nvmf/common.sh@469 -- # nvmfpid=389 00:25:36.072 07:42:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:36.072 07:42:51 -- nvmf/common.sh@470 -- # waitforlisten 389 00:25:36.072 07:42:51 -- common/autotest_common.sh@819 -- # '[' -z 389 ']' 00:25:36.072 07:42:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.072 07:42:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:36.072 07:42:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.072 07:42:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:36.072 07:42:51 -- common/autotest_common.sh@10 -- # set +x 00:25:36.072 [2024-07-14 07:42:51.958334] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:36.072 [2024-07-14 07:42:51.958413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.072 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.072 [2024-07-14 07:42:52.019812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.072 [2024-07-14 07:42:52.123478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:36.072 [2024-07-14 07:42:52.123624] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.072 [2024-07-14 07:42:52.123642] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.072 [2024-07-14 07:42:52.123654] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
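[Editor's note] The bring-up logged here starts nvmf_tgt on core 1 (mask 0x2) inside the namespace that owns cvl_0_0, then blocks until the RPC socket answers. A minimal sketch of the same sequence, with $rootdir standing in for the SPDK checkout and the socket poll a crude stand-in for the harness's waitforlisten helper:

  # Launch the target in the server-side namespace, shared-memory id 0.
  ip netns exec cvl_0_0_ns_spdk \
      "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # waitforlisten keeps retrying RPCs; polling for the socket approximates it.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Once the target is up, the rpc_cmd calls below create the TCP transport, put the discovery service on port 8009, and back the test subsystem with two null bdevs.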
00:25:36.072 [2024-07-14 07:42:52.123681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.009 07:42:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:37.009 07:42:52 -- common/autotest_common.sh@852 -- # return 0 00:25:37.009 07:42:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:37.009 07:42:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:37.009 07:42:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.010 07:42:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.010 07:42:52 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.010 07:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.010 07:42:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.010 [2024-07-14 07:42:52.908831] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.010 07:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.010 07:42:52 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:37.010 07:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.010 07:42:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.010 [2024-07-14 07:42:52.917037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:37.010 07:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.010 07:42:52 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:37.010 07:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.010 07:42:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.010 null0 00:25:37.010 07:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.010 07:42:52 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:37.010 07:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.010 07:42:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.010 null1 00:25:37.010 07:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.010 07:42:52 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:37.010 07:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.010 07:42:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.010 07:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.010 07:42:52 -- host/discovery.sh@45 -- # hostpid=560 00:25:37.010 07:42:52 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:37.010 07:42:52 -- host/discovery.sh@46 -- # waitforlisten 560 /tmp/host.sock 00:25:37.010 07:42:52 -- common/autotest_common.sh@819 -- # '[' -z 560 ']' 00:25:37.010 07:42:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:37.010 07:42:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:37.010 07:42:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:37.010 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:37.010 07:42:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:37.010 07:42:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.010 [2024-07-14 07:42:52.983997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:37.010 [2024-07-14 07:42:52.984080] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560 ] 00:25:37.010 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.010 [2024-07-14 07:42:53.044923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.010 [2024-07-14 07:42:53.158589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:37.010 [2024-07-14 07:42:53.158767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.958 07:42:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:37.958 07:42:53 -- common/autotest_common.sh@852 -- # return 0 00:25:37.958 07:42:53 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:37.958 07:42:53 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:37.958 07:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.958 07:42:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.958 07:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.958 07:42:53 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:37.958 07:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.958 07:42:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.958 07:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.958 07:42:53 -- host/discovery.sh@72 -- # notify_id=0 00:25:37.958 07:42:53 -- host/discovery.sh@78 -- # get_subsystem_names 00:25:37.958 07:42:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.958 07:42:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.958 07:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.958 07:42:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.958 07:42:53 -- host/discovery.sh@59 -- # sort 00:25:37.958 07:42:53 -- host/discovery.sh@59 -- # xargs 00:25:37.958 07:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.958 07:42:54 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:25:37.958 07:42:54 -- host/discovery.sh@79 -- # get_bdev_list 00:25:37.958 07:42:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.958 07:42:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.958 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.958 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:37.958 07:42:54 -- host/discovery.sh@55 -- # sort 00:25:37.958 07:42:54 -- host/discovery.sh@55 -- # xargs 00:25:37.958 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.958 07:42:54 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:25:37.958 07:42:54 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:37.958 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.958 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:37.958 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.958 07:42:54 -- host/discovery.sh@82 -- # get_subsystem_names 00:25:37.958 07:42:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.958 07:42:54 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:25:37.958 07:42:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.958 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:37.958 07:42:54 -- host/discovery.sh@59 -- # sort 00:25:37.958 07:42:54 -- host/discovery.sh@59 -- # xargs 00:25:37.958 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.958 07:42:54 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:25:37.958 07:42:54 -- host/discovery.sh@83 -- # get_bdev_list 00:25:37.958 07:42:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.958 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.958 07:42:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.958 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:37.958 07:42:54 -- host/discovery.sh@55 -- # sort 00:25:37.958 07:42:54 -- host/discovery.sh@55 -- # xargs 00:25:38.216 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.216 07:42:54 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:38.216 07:42:54 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:38.216 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.216 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.216 07:42:54 -- host/discovery.sh@86 -- # get_subsystem_names 00:25:38.216 07:42:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.216 07:42:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.216 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.216 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 07:42:54 -- host/discovery.sh@59 -- # sort 00:25:38.216 07:42:54 -- host/discovery.sh@59 -- # xargs 00:25:38.216 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.216 07:42:54 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:25:38.216 07:42:54 -- host/discovery.sh@87 -- # get_bdev_list 00:25:38.216 07:42:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.216 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.216 07:42:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.216 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 07:42:54 -- host/discovery.sh@55 -- # sort 00:25:38.216 07:42:54 -- host/discovery.sh@55 -- # xargs 00:25:38.216 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.216 07:42:54 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:38.216 07:42:54 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.216 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.216 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 [2024-07-14 07:42:54.260753] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.216 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.216 07:42:54 -- host/discovery.sh@92 -- # get_subsystem_names 00:25:38.216 07:42:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.216 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.216 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 07:42:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.216 07:42:54 -- host/discovery.sh@59 -- # sort 00:25:38.216 07:42:54 -- 
host/discovery.sh@59 -- # xargs 00:25:38.216 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.216 07:42:54 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:38.216 07:42:54 -- host/discovery.sh@93 -- # get_bdev_list 00:25:38.216 07:42:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.216 07:42:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.216 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.216 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 07:42:54 -- host/discovery.sh@55 -- # sort 00:25:38.216 07:42:54 -- host/discovery.sh@55 -- # xargs 00:25:38.216 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.216 07:42:54 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:25:38.216 07:42:54 -- host/discovery.sh@94 -- # get_notification_count 00:25:38.216 07:42:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:38.216 07:42:54 -- host/discovery.sh@74 -- # jq '. | length' 00:25:38.216 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.216 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.216 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.216 07:42:54 -- host/discovery.sh@74 -- # notification_count=0 00:25:38.216 07:42:54 -- host/discovery.sh@75 -- # notify_id=0 00:25:38.216 07:42:54 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:25:38.473 07:42:54 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:38.473 07:42:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.473 07:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.473 07:42:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.473 07:42:54 -- host/discovery.sh@100 -- # sleep 1 00:25:39.038 [2024-07-14 07:42:55.030195] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:39.038 [2024-07-14 07:42:55.030234] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:39.038 [2024-07-14 07:42:55.030263] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:39.038 [2024-07-14 07:42:55.158677] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:39.296 [2024-07-14 07:42:55.258814] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:39.296 [2024-07-14 07:42:55.258843] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:39.296 07:42:55 -- host/discovery.sh@101 -- # get_subsystem_names 00:25:39.296 07:42:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.296 07:42:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.296 07:42:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.296 07:42:55 -- host/discovery.sh@59 -- # sort 00:25:39.296 07:42:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.296 07:42:55 -- host/discovery.sh@59 -- # xargs 00:25:39.296 07:42:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.296 07:42:55 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.296 07:42:55 -- host/discovery.sh@102 -- # get_bdev_list 00:25:39.296 07:42:55 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.296 07:42:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.296 07:42:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.296 07:42:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.296 07:42:55 -- host/discovery.sh@55 -- # sort 00:25:39.296 07:42:55 -- host/discovery.sh@55 -- # xargs 00:25:39.296 07:42:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.555 07:42:55 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:39.555 07:42:55 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:25:39.555 07:42:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.555 07:42:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.555 07:42:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.555 07:42:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.555 07:42:55 -- host/discovery.sh@63 -- # sort -n 00:25:39.555 07:42:55 -- host/discovery.sh@63 -- # xargs 00:25:39.555 07:42:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.555 07:42:55 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:25:39.555 07:42:55 -- host/discovery.sh@104 -- # get_notification_count 00:25:39.555 07:42:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:39.555 07:42:55 -- host/discovery.sh@74 -- # jq '. | length' 00:25:39.555 07:42:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.555 07:42:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.555 07:42:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.555 07:42:55 -- host/discovery.sh@74 -- # notification_count=1 00:25:39.555 07:42:55 -- host/discovery.sh@75 -- # notify_id=1 00:25:39.555 07:42:55 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:25:39.555 07:42:55 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:39.555 07:42:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.555 07:42:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.555 07:42:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.555 07:42:55 -- host/discovery.sh@109 -- # sleep 1 00:25:40.490 07:42:56 -- host/discovery.sh@110 -- # get_bdev_list 00:25:40.490 07:42:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.490 07:42:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.490 07:42:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.490 07:42:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.490 07:42:56 -- host/discovery.sh@55 -- # sort 00:25:40.490 07:42:56 -- host/discovery.sh@55 -- # xargs 00:25:40.490 07:42:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.490 07:42:56 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.490 07:42:56 -- host/discovery.sh@111 -- # get_notification_count 00:25:40.490 07:42:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:40.490 07:42:56 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:40.490 07:42:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.490 07:42:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.490 07:42:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.490 07:42:56 -- host/discovery.sh@74 -- # notification_count=1 00:25:40.490 07:42:56 -- host/discovery.sh@75 -- # notify_id=2 00:25:40.490 07:42:56 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:25:40.490 07:42:56 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:40.490 07:42:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.490 07:42:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.748 [2024-07-14 07:42:56.664124] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:40.748 [2024-07-14 07:42:56.664874] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:40.748 [2024-07-14 07:42:56.664946] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:40.748 07:42:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.748 07:42:56 -- host/discovery.sh@117 -- # sleep 1 00:25:40.748 [2024-07-14 07:42:56.793301] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:41.005 [2024-07-14 07:42:57.095802] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:41.006 [2024-07-14 07:42:57.095831] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:41.006 [2024-07-14 07:42:57.095842] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:41.571 07:42:57 -- host/discovery.sh@118 -- # get_subsystem_names 00:25:41.571 07:42:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.571 07:42:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.571 07:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.571 07:42:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.571 07:42:57 -- host/discovery.sh@59 -- # sort 00:25:41.571 07:42:57 -- host/discovery.sh@59 -- # xargs 00:25:41.571 07:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.571 07:42:57 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.571 07:42:57 -- host/discovery.sh@119 -- # get_bdev_list 00:25:41.571 07:42:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.571 07:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.571 07:42:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.571 07:42:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.571 07:42:57 -- host/discovery.sh@55 -- # sort 00:25:41.571 07:42:57 -- host/discovery.sh@55 -- # xargs 00:25:41.571 07:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.830 07:42:57 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.830 07:42:57 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:25:41.830 07:42:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.830 07:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.830 07:42:57 -- common/autotest_common.sh@10 -- 
# set +x 00:25:41.830 07:42:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.830 07:42:57 -- host/discovery.sh@63 -- # sort -n 00:25:41.830 07:42:57 -- host/discovery.sh@63 -- # xargs 00:25:41.830 07:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.830 07:42:57 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:41.830 07:42:57 -- host/discovery.sh@121 -- # get_notification_count 00:25:41.830 07:42:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:41.830 07:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.830 07:42:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.830 07:42:57 -- host/discovery.sh@74 -- # jq '. | length' 00:25:41.830 07:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.830 07:42:57 -- host/discovery.sh@74 -- # notification_count=0 00:25:41.830 07:42:57 -- host/discovery.sh@75 -- # notify_id=2 00:25:41.830 07:42:57 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:25:41.830 07:42:57 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:41.830 07:42:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.830 07:42:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.830 [2024-07-14 07:42:57.840336] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:41.830 [2024-07-14 07:42:57.840372] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.830 [2024-07-14 07:42:57.840930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.830 [2024-07-14 07:42:57.840959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.830 [2024-07-14 07:42:57.840976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.830 [2024-07-14 07:42:57.840990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.830 [2024-07-14 07:42:57.841004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.830 [2024-07-14 07:42:57.841017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.830 [2024-07-14 07:42:57.841031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.830 [2024-07-14 07:42:57.841044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.830 [2024-07-14 07:42:57.841057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446b80 is same with the state(5) to be set 00:25:41.830 07:42:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.830 07:42:57 -- host/discovery.sh@127 -- # sleep 1 00:25:41.830 [2024-07-14 07:42:57.850920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446b80 (9): Bad file descriptor 00:25:41.830 [2024-07-14 07:42:57.860964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:41.830 [2024-07-14 07:42:57.861253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.830 [2024-07-14 07:42:57.861484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.830 [2024-07-14 07:42:57.861513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446b80 with addr=10.0.0.2, port=4420 00:25:41.830 [2024-07-14 07:42:57.861541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446b80 is same with the state(5) to be set 00:25:41.830 [2024-07-14 07:42:57.861567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446b80 (9): Bad file descriptor 00:25:41.830 [2024-07-14 07:42:57.861619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:41.830 [2024-07-14 07:42:57.861640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:41.830 [2024-07-14 07:42:57.861657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:41.830 [2024-07-14 07:42:57.861679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.830 [2024-07-14 07:42:57.871051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:41.830 [2024-07-14 07:42:57.871331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.830 [2024-07-14 07:42:57.871541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.830 [2024-07-14 07:42:57.871567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446b80 with addr=10.0.0.2, port=4420 00:25:41.830 [2024-07-14 07:42:57.871582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446b80 is same with the state(5) to be set 00:25:41.830 [2024-07-14 07:42:57.871604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446b80 (9): Bad file descriptor 00:25:41.830 [2024-07-14 07:42:57.871625] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:41.830 [2024-07-14 07:42:57.871639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:41.830 [2024-07-14 07:42:57.871653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:41.830 [2024-07-14 07:42:57.871684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
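[Editor's note] These reconnect retries are interleaved in the script with path checks issued over /tmp/host.sock. Pieced together from the jq/sort/xargs pipeline in the xtrace around @120 above, the helper behind them is essentially the following, with $rpc shorthand for the rpc.py wrapper the harness calls rpc_cmd:

  # List the trsvcid of every active path for a controller (nvme0 here).
  get_subsystem_paths() {
      "$rpc" -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
          jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

It prints "4420 4421" while both portals are attached, and just "4421" once the discovery poller prunes the removed 4420 path.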
00:25:41.830 [2024-07-14 07:42:57.881134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:41.830 [2024-07-14 07:42:57.881451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.830 [2024-07-14 07:42:57.881664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.830 [2024-07-14 07:42:57.881692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446b80 with addr=10.0.0.2, port=4420 00:25:41.830 [2024-07-14 07:42:57.881709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446b80 is same with the state(5) to be set 00:25:41.830 [2024-07-14 07:42:57.881734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446b80 (9): Bad file descriptor 00:25:41.830 [2024-07-14 07:42:57.881781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:41.830 [2024-07-14 07:42:57.881802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:41.830 [2024-07-14 07:42:57.881817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:41.830 [2024-07-14 07:42:57.881837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.830 [2024-07-14 07:42:57.891224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:41.830 [2024-07-14 07:42:57.891464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.830 [2024-07-14 07:42:57.891700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.830 [2024-07-14 07:42:57.891728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446b80 with addr=10.0.0.2, port=4420 00:25:41.830 [2024-07-14 07:42:57.891746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446b80 is same with the state(5) to be set 00:25:41.831 [2024-07-14 07:42:57.891776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446b80 (9): Bad file descriptor 00:25:41.831 [2024-07-14 07:42:57.891800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:41.831 [2024-07-14 07:42:57.891815] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:41.831 [2024-07-14 07:42:57.891830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:41.831 [2024-07-14 07:42:57.891878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
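[Editor's note] Alongside the path checks, the test counts bdev hot-add/remove events. The get_notification_count and notify_id bookkeeping in the xtrace reduces to the sketch below; the cursor-advance step is inferred from the logged values (notify_id goes 0, 1, 2 and then holds):

  # Count notifications newer than the last seen id, then advance the cursor.
  get_notification_count() {
      notification_count=$("$rpc" -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }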
00:25:41.831 [2024-07-14 07:42:57.901305] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:41.831 [2024-07-14 07:42:57.901531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.831 [2024-07-14 07:42:57.901788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.831 [2024-07-14 07:42:57.901815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446b80 with addr=10.0.0.2, port=4420 00:25:41.831 [2024-07-14 07:42:57.901833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446b80 is same with the state(5) to be set 00:25:41.831 [2024-07-14 07:42:57.901857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446b80 (9): Bad file descriptor 00:25:41.831 [2024-07-14 07:42:57.901928] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:41.831 [2024-07-14 07:42:57.901948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:41.831 [2024-07-14 07:42:57.901962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:41.831 [2024-07-14 07:42:57.901980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.831 [2024-07-14 07:42:57.911380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:41.831 [2024-07-14 07:42:57.911679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.831 [2024-07-14 07:42:57.911963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.831 [2024-07-14 07:42:57.911990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446b80 with addr=10.0.0.2, port=4420 00:25:41.831 [2024-07-14 07:42:57.912005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446b80 is same with the state(5) to be set 00:25:41.831 [2024-07-14 07:42:57.912028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446b80 (9): Bad file descriptor 00:25:41.831 [2024-07-14 07:42:57.912060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:41.831 [2024-07-14 07:42:57.912078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:41.831 [2024-07-14 07:42:57.912091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:41.831 [2024-07-14 07:42:57.912110] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
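[Editor's note] For context, the errno 111 storm in these retry blocks is deliberate: step @126 removed the 4420 listener while live qpairs existed, so every reconnect to that portal is refused until the next discovery log page prunes the stale path. The trigger, condensed:

  # Drop the original portal out from under the attached controller.
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # Once the poller catches up: 4420 "not found", 4421 "found again",
  # and get_subsystem_paths nvme0 reports only 4421.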
00:25:41.831 [2024-07-14 07:42:57.921457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:41.831 [2024-07-14 07:42:57.921776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.831 [2024-07-14 07:42:57.922058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.831 [2024-07-14 07:42:57.922084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2446b80 with addr=10.0.0.2, port=4420 00:25:41.831 [2024-07-14 07:42:57.922100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446b80 is same with the state(5) to be set 00:25:41.831 [2024-07-14 07:42:57.922121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446b80 (9): Bad file descriptor 00:25:41.831 [2024-07-14 07:42:57.922170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:41.831 [2024-07-14 07:42:57.922189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:41.831 [2024-07-14 07:42:57.922219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:41.831 [2024-07-14 07:42:57.922240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.831 [2024-07-14 07:42:57.927171] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:41.831 [2024-07-14 07:42:57.927200] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:42.762 07:42:58 -- host/discovery.sh@128 -- # get_subsystem_names 00:25:42.762 07:42:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.762 07:42:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.762 07:42:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.762 07:42:58 -- host/discovery.sh@59 -- # sort 00:25:42.762 07:42:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.762 07:42:58 -- host/discovery.sh@59 -- # xargs 00:25:42.762 07:42:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.762 07:42:58 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.762 07:42:58 -- host/discovery.sh@129 -- # get_bdev_list 00:25:42.762 07:42:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.762 07:42:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.762 07:42:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.762 07:42:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.762 07:42:58 -- host/discovery.sh@55 -- # sort 00:25:42.762 07:42:58 -- host/discovery.sh@55 -- # xargs 00:25:42.762 07:42:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.019 07:42:58 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.019 07:42:58 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:25:43.019 07:42:58 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.019 07:42:58 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.019 07:42:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.019 07:42:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.019 07:42:58 -- 
host/discovery.sh@63 -- # sort -n 00:25:43.019 07:42:58 -- host/discovery.sh@63 -- # xargs 00:25:43.019 07:42:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.019 07:42:58 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:25:43.019 07:42:58 -- host/discovery.sh@131 -- # get_notification_count 00:25:43.019 07:42:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:43.019 07:42:58 -- host/discovery.sh@74 -- # jq '. | length' 00:25:43.019 07:42:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.019 07:42:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.019 07:42:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.019 07:42:59 -- host/discovery.sh@74 -- # notification_count=0 00:25:43.019 07:42:59 -- host/discovery.sh@75 -- # notify_id=2 00:25:43.019 07:42:59 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:25:43.019 07:42:59 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:43.019 07:42:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.019 07:42:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.019 07:42:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.019 07:42:59 -- host/discovery.sh@135 -- # sleep 1 00:25:43.948 07:43:00 -- host/discovery.sh@136 -- # get_subsystem_names 00:25:43.948 07:43:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.948 07:43:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.948 07:43:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.948 07:43:00 -- common/autotest_common.sh@10 -- # set +x 00:25:43.948 07:43:00 -- host/discovery.sh@59 -- # sort 00:25:43.948 07:43:00 -- host/discovery.sh@59 -- # xargs 00:25:43.948 07:43:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.948 07:43:00 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:25:43.948 07:43:00 -- host/discovery.sh@137 -- # get_bdev_list 00:25:43.948 07:43:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.948 07:43:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.948 07:43:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.948 07:43:00 -- common/autotest_common.sh@10 -- # set +x 00:25:43.948 07:43:00 -- host/discovery.sh@55 -- # sort 00:25:43.948 07:43:00 -- host/discovery.sh@55 -- # xargs 00:25:43.948 07:43:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.948 07:43:00 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:25:43.948 07:43:00 -- host/discovery.sh@138 -- # get_notification_count 00:25:44.205 07:43:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:44.205 07:43:00 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:44.205 07:43:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.205 07:43:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.205 07:43:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.205 07:43:00 -- host/discovery.sh@74 -- # notification_count=2 00:25:44.205 07:43:00 -- host/discovery.sh@75 -- # notify_id=4 00:25:44.205 07:43:00 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:25:44.205 07:43:00 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.205 07:43:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.205 07:43:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.136 [2024-07-14 07:43:01.216100] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:45.136 [2024-07-14 07:43:01.216127] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:45.136 [2024-07-14 07:43:01.216165] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.136 [2024-07-14 07:43:01.303460] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:45.701 [2024-07-14 07:43:01.570511] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:45.701 [2024-07-14 07:43:01.570553] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.701 07:43:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.701 07:43:01 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.701 07:43:01 -- common/autotest_common.sh@640 -- # local es=0 00:25:45.701 07:43:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.701 07:43:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:45.701 07:43:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.701 07:43:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:45.701 07:43:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.701 07:43:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.701 07:43:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.701 07:43:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.701 request: 00:25:45.701 { 00:25:45.701 "name": "nvme", 00:25:45.701 "trtype": "tcp", 00:25:45.701 "traddr": "10.0.0.2", 00:25:45.701 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:45.701 "adrfam": "ipv4", 00:25:45.701 "trsvcid": "8009", 00:25:45.701 "wait_for_attach": true, 00:25:45.701 "method": "bdev_nvme_start_discovery", 00:25:45.701 "req_id": 1 00:25:45.701 } 00:25:45.701 Got JSON-RPC error response 00:25:45.701 response: 00:25:45.701 { 00:25:45.701 "code": -17, 00:25:45.701 "message": "File exists" 00:25:45.701 } 00:25:45.701 07:43:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:45.701 07:43:01 -- common/autotest_common.sh@643 -- # es=1 00:25:45.701 07:43:01 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:45.701 07:43:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:45.701 07:43:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:45.701 07:43:01 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:25:45.701 07:43:01 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:45.701 07:43:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.701 07:43:01 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:45.701 07:43:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.701 07:43:01 -- host/discovery.sh@67 -- # sort 00:25:45.701 07:43:01 -- host/discovery.sh@67 -- # xargs 00:25:45.701 07:43:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.701 07:43:01 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:25:45.701 07:43:01 -- host/discovery.sh@147 -- # get_bdev_list 00:25:45.701 07:43:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.701 07:43:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.701 07:43:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.701 07:43:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.701 07:43:01 -- host/discovery.sh@55 -- # sort 00:25:45.701 07:43:01 -- host/discovery.sh@55 -- # xargs 00:25:45.701 07:43:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.701 07:43:01 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.701 07:43:01 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.701 07:43:01 -- common/autotest_common.sh@640 -- # local es=0 00:25:45.701 07:43:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.701 07:43:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:45.701 07:43:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.701 07:43:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:45.701 07:43:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.701 07:43:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.702 07:43:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.702 07:43:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.702 request: 00:25:45.702 { 00:25:45.702 "name": "nvme_second", 00:25:45.702 "trtype": "tcp", 00:25:45.702 "traddr": "10.0.0.2", 00:25:45.702 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:45.702 "adrfam": "ipv4", 00:25:45.702 "trsvcid": "8009", 00:25:45.702 "wait_for_attach": true, 00:25:45.702 "method": "bdev_nvme_start_discovery", 00:25:45.702 "req_id": 1 00:25:45.702 } 00:25:45.702 Got JSON-RPC error response 00:25:45.702 response: 00:25:45.702 { 00:25:45.702 "code": -17, 00:25:45.702 "message": "File exists" 00:25:45.702 } 00:25:45.702 07:43:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:45.702 07:43:01 -- common/autotest_common.sh@643 -- # es=1 00:25:45.702 07:43:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:45.702 07:43:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:45.702 07:43:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:45.702 
07:43:01 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:25:45.702 07:43:01 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:45.702 07:43:01 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:45.702 07:43:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.702 07:43:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.702 07:43:01 -- host/discovery.sh@67 -- # sort 00:25:45.702 07:43:01 -- host/discovery.sh@67 -- # xargs 00:25:45.702 07:43:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.702 07:43:01 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:25:45.702 07:43:01 -- host/discovery.sh@153 -- # get_bdev_list 00:25:45.702 07:43:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.702 07:43:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.702 07:43:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.702 07:43:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.702 07:43:01 -- host/discovery.sh@55 -- # sort 00:25:45.702 07:43:01 -- host/discovery.sh@55 -- # xargs 00:25:45.702 07:43:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.702 07:43:01 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.702 07:43:01 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:45.702 07:43:01 -- common/autotest_common.sh@640 -- # local es=0 00:25:45.702 07:43:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:45.702 07:43:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:45.702 07:43:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.702 07:43:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:45.702 07:43:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.702 07:43:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:45.702 07:43:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.702 07:43:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.633 [2024-07-14 07:43:02.773965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.633 [2024-07-14 07:43:02.774217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.633 [2024-07-14 07:43:02.774246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243c070 with addr=10.0.0.2, port=8010 00:25:46.633 [2024-07-14 07:43:02.774270] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:46.633 [2024-07-14 07:43:02.774285] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:46.633 [2024-07-14 07:43:02.774298] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:48.004 [2024-07-14 07:43:03.776401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.004 [2024-07-14 07:43:03.776624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.004 [2024-07-14 07:43:03.776653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x243c070 with addr=10.0.0.2, port=8010 00:25:48.004 [2024-07-14 07:43:03.776674] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:48.004 [2024-07-14 07:43:03.776688] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:48.004 [2024-07-14 07:43:03.776701] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:48.675 [2024-07-14 07:43:04.778590] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:48.675 request: 00:25:48.675 { 00:25:48.675 "name": "nvme_second", 00:25:48.675 "trtype": "tcp", 00:25:48.675 "traddr": "10.0.0.2", 00:25:48.675 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:48.675 "adrfam": "ipv4", 00:25:48.675 "trsvcid": "8010", 00:25:48.675 "attach_timeout_ms": 3000, 00:25:48.675 "method": "bdev_nvme_start_discovery", 00:25:48.675 "req_id": 1 00:25:48.675 } 00:25:48.675 Got JSON-RPC error response 00:25:48.675 response: 00:25:48.675 { 00:25:48.675 "code": -110, 00:25:48.675 "message": "Connection timed out" 00:25:48.675 } 00:25:48.675 07:43:04 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:48.675 07:43:04 -- common/autotest_common.sh@643 -- # es=1 00:25:48.675 07:43:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:48.675 07:43:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:48.675 07:43:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:48.675 07:43:04 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:25:48.675 07:43:04 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.675 07:43:04 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.675 07:43:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.675 07:43:04 -- common/autotest_common.sh@10 -- # set +x 00:25:48.675 07:43:04 -- host/discovery.sh@67 -- # sort 00:25:48.675 07:43:04 -- host/discovery.sh@67 -- # xargs 00:25:48.675 07:43:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.675 07:43:04 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:25:48.675 07:43:04 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:25:48.675 07:43:04 -- host/discovery.sh@162 -- # kill 560 00:25:48.675 07:43:04 -- host/discovery.sh@163 -- # nvmftestfini 00:25:48.675 07:43:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:48.675 07:43:04 -- nvmf/common.sh@116 -- # sync 00:25:48.675 07:43:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:48.675 07:43:04 -- nvmf/common.sh@119 -- # set +e 00:25:48.675 07:43:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:48.675 07:43:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:48.675 rmmod nvme_tcp 00:25:48.931 rmmod nvme_fabrics 00:25:48.931 rmmod nvme_keyring 00:25:48.931 07:43:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:48.931 07:43:04 -- nvmf/common.sh@123 -- # set -e 00:25:48.931 07:43:04 -- nvmf/common.sh@124 -- # return 0 00:25:48.931 07:43:04 -- nvmf/common.sh@477 -- # '[' -n 389 ']' 00:25:48.931 07:43:04 -- nvmf/common.sh@478 -- # killprocess 389 00:25:48.931 07:43:04 -- common/autotest_common.sh@926 -- # '[' -z 389 ']' 00:25:48.931 07:43:04 -- common/autotest_common.sh@930 -- # kill -0 389 00:25:48.931 07:43:04 -- common/autotest_common.sh@931 -- # uname 00:25:48.931 07:43:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:48.931 07:43:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 389 00:25:48.931 07:43:04 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:48.931 07:43:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:48.931 07:43:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 389' 00:25:48.931 killing process with pid 389 00:25:48.931 07:43:04 -- common/autotest_common.sh@945 -- # kill 389 00:25:48.931 07:43:04 -- common/autotest_common.sh@950 -- # wait 389 00:25:49.189 07:43:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:49.189 07:43:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:49.189 07:43:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:49.189 07:43:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.189 07:43:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:49.189 07:43:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.189 07:43:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.189 07:43:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.143 07:43:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:51.143 00:25:51.143 real 0m17.375s 00:25:51.143 user 0m27.100s 00:25:51.143 sys 0m2.957s 00:25:51.143 07:43:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.143 07:43:07 -- common/autotest_common.sh@10 -- # set +x 00:25:51.143 ************************************ 00:25:51.143 END TEST nvmf_discovery 00:25:51.143 ************************************ 00:25:51.143 07:43:07 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:51.143 07:43:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:51.143 07:43:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:51.143 07:43:07 -- common/autotest_common.sh@10 -- # set +x 00:25:51.143 ************************************ 00:25:51.143 START TEST nvmf_discovery_remove_ifc 00:25:51.143 ************************************ 00:25:51.143 07:43:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:51.143 * Looking for test storage... 
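Each test in this job runs under the run_test wrapper from autotest_common.sh, which produces the START TEST/END TEST banners and the real/user/sys timing summary seen above (0m17.375s for nvmf_discovery). A condensed sketch of that pattern follows; the real wrapper also handles xtrace toggling and exit-code propagation:

  # Reduced illustration of the run_test banner/timing behaviour.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test nvmf_discovery_remove_ifc \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp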
00:25:51.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:51.143 07:43:07 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.143 07:43:07 -- nvmf/common.sh@7 -- # uname -s 00:25:51.143 07:43:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.143 07:43:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.143 07:43:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.143 07:43:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.143 07:43:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.143 07:43:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.143 07:43:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.143 07:43:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.143 07:43:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.402 07:43:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.402 07:43:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.402 07:43:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.402 07:43:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.402 07:43:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.402 07:43:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.402 07:43:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.402 07:43:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.402 07:43:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.402 07:43:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.402 07:43:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.402 07:43:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.402 07:43:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.402 07:43:07 -- paths/export.sh@5 -- # export PATH 00:25:51.402 07:43:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.402 07:43:07 -- nvmf/common.sh@46 -- # : 0 00:25:51.402 07:43:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:51.402 07:43:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:51.402 07:43:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:51.402 07:43:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.402 07:43:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.402 07:43:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:51.402 07:43:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:51.402 07:43:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:51.402 07:43:07 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:51.402 07:43:07 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:51.402 07:43:07 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:51.402 07:43:07 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:51.402 07:43:07 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:51.402 07:43:07 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:51.402 07:43:07 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:51.402 07:43:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:51.402 07:43:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.402 07:43:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:51.402 07:43:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:51.402 07:43:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:51.402 07:43:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.402 07:43:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.403 07:43:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.403 07:43:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:51.403 07:43:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:51.403 07:43:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:51.403 07:43:07 -- common/autotest_common.sh@10 -- # set +x 00:25:53.305 07:43:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:53.305 07:43:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:53.305 07:43:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:53.305 07:43:09 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:53.305 07:43:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:53.305 07:43:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:53.305 07:43:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:53.305 07:43:09 -- nvmf/common.sh@294 -- # net_devs=() 00:25:53.305 07:43:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:53.305 07:43:09 -- nvmf/common.sh@295 -- # e810=() 00:25:53.305 07:43:09 -- nvmf/common.sh@295 -- # local -ga e810 00:25:53.305 07:43:09 -- nvmf/common.sh@296 -- # x722=() 00:25:53.305 07:43:09 -- nvmf/common.sh@296 -- # local -ga x722 00:25:53.305 07:43:09 -- nvmf/common.sh@297 -- # mlx=() 00:25:53.305 07:43:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:53.305 07:43:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.305 07:43:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:53.305 07:43:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:53.305 07:43:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:53.305 07:43:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:53.305 07:43:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:53.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:53.305 07:43:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:53.305 07:43:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:53.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:53.305 07:43:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:53.305 07:43:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:53.305 07:43:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:53.305 07:43:09 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:53.305 07:43:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.305 07:43:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:53.305 07:43:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.305 07:43:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:53.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:53.305 07:43:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.305 07:43:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:53.306 07:43:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.306 07:43:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:53.306 07:43:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.306 07:43:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:53.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:53.306 07:43:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.306 07:43:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:53.306 07:43:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:53.306 07:43:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:53.306 07:43:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:53.306 07:43:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:53.306 07:43:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.306 07:43:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.306 07:43:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.306 07:43:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:53.306 07:43:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.306 07:43:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.306 07:43:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:53.306 07:43:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.306 07:43:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.306 07:43:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:53.306 07:43:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:53.306 07:43:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.306 07:43:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.306 07:43:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.306 07:43:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.306 07:43:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:53.306 07:43:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.306 07:43:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.306 07:43:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.306 07:43:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:53.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:53.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:25:53.306 00:25:53.306 --- 10.0.0.2 ping statistics --- 00:25:53.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.306 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:53.306 07:43:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:25:53.306 00:25:53.306 --- 10.0.0.1 ping statistics --- 00:25:53.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.306 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:25:53.306 07:43:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.306 07:43:09 -- nvmf/common.sh@410 -- # return 0 00:25:53.306 07:43:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:53.306 07:43:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.306 07:43:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:53.306 07:43:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:53.306 07:43:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.306 07:43:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:53.306 07:43:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:53.306 07:43:09 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:53.306 07:43:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:53.306 07:43:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:53.306 07:43:09 -- common/autotest_common.sh@10 -- # set +x 00:25:53.306 07:43:09 -- nvmf/common.sh@469 -- # nvmfpid=4358 00:25:53.306 07:43:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:53.306 07:43:09 -- nvmf/common.sh@470 -- # waitforlisten 4358 00:25:53.306 07:43:09 -- common/autotest_common.sh@819 -- # '[' -z 4358 ']' 00:25:53.306 07:43:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.306 07:43:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:53.306 07:43:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.306 07:43:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:53.306 07:43:09 -- common/autotest_common.sh@10 -- # set +x 00:25:53.565 [2024-07-14 07:43:09.490142] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:53.565 [2024-07-14 07:43:09.490249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.565 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.565 [2024-07-14 07:43:09.560762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.565 [2024-07-14 07:43:09.668749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:53.565 [2024-07-14 07:43:09.668947] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.565 [2024-07-14 07:43:09.668967] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:53.565 [2024-07-14 07:43:09.668980] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.565 [2024-07-14 07:43:09.669019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.500 07:43:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:54.500 07:43:10 -- common/autotest_common.sh@852 -- # return 0 00:25:54.500 07:43:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:54.500 07:43:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:54.500 07:43:10 -- common/autotest_common.sh@10 -- # set +x 00:25:54.500 07:43:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.500 07:43:10 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:54.500 07:43:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.500 07:43:10 -- common/autotest_common.sh@10 -- # set +x 00:25:54.500 [2024-07-14 07:43:10.445721] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.500 [2024-07-14 07:43:10.453902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:54.500 null0 00:25:54.500 [2024-07-14 07:43:10.485863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.500 07:43:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.500 07:43:10 -- host/discovery_remove_ifc.sh@59 -- # hostpid=4458 00:25:54.500 07:43:10 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:54.500 07:43:10 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4458 /tmp/host.sock 00:25:54.500 07:43:10 -- common/autotest_common.sh@819 -- # '[' -z 4458 ']' 00:25:54.501 07:43:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:54.501 07:43:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:54.501 07:43:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:54.501 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:54.501 07:43:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:54.501 07:43:10 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 [2024-07-14 07:43:10.545549] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:54.501 [2024-07-14 07:43:10.545632] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4458 ] 00:25:54.501 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.501 [2024-07-14 07:43:10.605362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.759 [2024-07-14 07:43:10.711518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:54.759 [2024-07-14 07:43:10.711693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.759 07:43:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:54.759 07:43:10 -- common/autotest_common.sh@852 -- # return 0 00:25:54.759 07:43:10 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:54.759 07:43:10 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:54.759 07:43:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.759 07:43:10 -- common/autotest_common.sh@10 -- # set +x 00:25:54.759 07:43:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.759 07:43:10 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:54.759 07:43:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.759 07:43:10 -- common/autotest_common.sh@10 -- # set +x 00:25:54.759 07:43:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.759 07:43:10 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:54.759 07:43:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.759 07:43:10 -- common/autotest_common.sh@10 -- # set +x 00:25:56.133 [2024-07-14 07:43:11.924058] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:56.133 [2024-07-14 07:43:11.924088] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:56.133 [2024-07-14 07:43:11.924112] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.133 [2024-07-14 07:43:12.010400] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:56.133 [2024-07-14 07:43:12.074047] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:56.133 [2024-07-14 07:43:12.074096] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:56.133 [2024-07-14 07:43:12.074132] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:56.133 [2024-07-14 07:43:12.074154] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.133 [2024-07-14 07:43:12.074195] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:56.133 07:43:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.133 07:43:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.133 07:43:12 -- common/autotest_common.sh@10 -- # set +x 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.133 07:43:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.133 07:43:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.133 07:43:12 -- common/autotest_common.sh@10 -- # set +x 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.133 07:43:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.133 07:43:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.063 07:43:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.063 07:43:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.063 07:43:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.063 07:43:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.063 07:43:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.063 07:43:13 -- common/autotest_common.sh@10 -- # set +x 00:25:57.063 07:43:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.063 07:43:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.321 07:43:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:57.321 07:43:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:58.254 07:43:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.254 07:43:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.254 07:43:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.254 07:43:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.254 07:43:14 -- common/autotest_common.sh@10 -- # set +x 00:25:58.254 07:43:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.254 07:43:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.254 07:43:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.254 07:43:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:58.254 07:43:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.188 07:43:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.188 07:43:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.188 07:43:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.188 07:43:15 -- host/discovery_remove_ifc.sh@29 -- 
# jq -r '.[].name' 00:25:59.188 07:43:15 -- common/autotest_common.sh@10 -- # set +x 00:25:59.188 07:43:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.188 07:43:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.188 07:43:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.188 07:43:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.188 07:43:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.562 07:43:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.562 07:43:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.562 07:43:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.562 07:43:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.562 07:43:16 -- common/autotest_common.sh@10 -- # set +x 00:26:00.562 07:43:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.562 07:43:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.562 07:43:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.562 07:43:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.562 07:43:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.495 07:43:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.495 07:43:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.495 07:43:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.495 07:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.495 07:43:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.495 07:43:17 -- common/autotest_common.sh@10 -- # set +x 00:26:01.495 07:43:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.495 07:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.495 07:43:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.495 07:43:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.495 [2024-07-14 07:43:17.515843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:01.495 [2024-07-14 07:43:17.515939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.495 [2024-07-14 07:43:17.515977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.495 [2024-07-14 07:43:17.515995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.495 [2024-07-14 07:43:17.516008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.495 [2024-07-14 07:43:17.516022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.495 [2024-07-14 07:43:17.516035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.495 [2024-07-14 07:43:17.516048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.495 [2024-07-14 07:43:17.516060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:01.495 [2024-07-14 07:43:17.516074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.495 [2024-07-14 07:43:17.516087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.495 [2024-07-14 07:43:17.516100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103b810 is same with the state(5) to be set 00:26:01.495 [2024-07-14 07:43:17.525863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103b810 (9): Bad file descriptor 00:26:01.495 [2024-07-14 07:43:17.535959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.495 [2024-07-14 07:43:17.535997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.495 [2024-07-14 07:43:17.536033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:64 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.495 [2024-07-14 07:43:17.536063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.495 [2024-07-14 07:43:17.536082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.495 [2024-07-14 07:43:17.536096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.495 [2024-07-14 07:43:17.536216] bdev_nvme.c:1582:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10751f0 was disconnected and freed in a reset ctrlr sequence. 00:26:01.495 [2024-07-14 07:43:17.536241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:02.429 07:43:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.429 07:43:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.429 07:43:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:02.429 07:43:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.429 07:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:02.429 07:43:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.429 07:43:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.429 [2024-07-14 07:43:18.557917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:03.798 [2024-07-14 07:43:19.581938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:03.798 [2024-07-14 07:43:19.581998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x103b810 with addr=10.0.0.2, port=4420 00:26:03.798 [2024-07-14 07:43:19.582025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103b810 is same with the state(5) to be set 00:26:03.798 [2024-07-14 07:43:19.582109] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:03.798 [2024-07-14 07:43:19.582131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.798 [2024-07-14 07:43:19.582145] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.798 [2024-07-14 07:43:19.582176] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:03.798 [2024-07-14 07:43:19.582569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103b810 (9): Bad file descriptor 00:26:03.798 [2024-07-14 07:43:19.582645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.798 [2024-07-14 07:43:19.582695] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:03.798 [2024-07-14 07:43:19.582737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.798 [2024-07-14 07:43:19.582763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.798 [2024-07-14 07:43:19.582785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.798 [2024-07-14 07:43:19.582799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.798 [2024-07-14 07:43:19.582815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.798 [2024-07-14 07:43:19.582837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.798 [2024-07-14 07:43:19.582853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.798 [2024-07-14 07:43:19.582885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.798 [2024-07-14 07:43:19.582920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.798 [2024-07-14 07:43:19.582934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.798 [2024-07-14 07:43:19.582948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:03.798 [2024-07-14 07:43:19.582992] bdev.c:4968:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:26:03.798 [2024-07-14 07:43:19.583113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103bc20 (9): Bad file descriptor 00:26:03.798 [2024-07-14 07:43:19.584150] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:03.798 [2024-07-14 07:43:19.584174] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:03.798 07:43:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.798 07:43:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:03.798 07:43:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.730 [2024-07-14 07:43:20.585169] bdev_raid.c:3350:raid_bdev_examine_load_sb_cb: *ERROR*: Failed to examine bdev nvme0n1: Input/output error 00:26:04.730 [2024-07-14 07:43:20.585245] vbdev_gpt.c: 468:gpt_bdev_complete: *ERROR*: Gpt: bdev=nvme0n1 io error status=0 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.730 07:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.730 07:43:20 -- common/autotest_common.sh@10 -- # set +x 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.730 07:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.730 07:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.730 07:43:20 -- common/autotest_common.sh@10 -- # set +x 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.730 07:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:04.730 07:43:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.694 [2024-07-14 07:43:21.599883] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:05.694 [2024-07-14 07:43:21.599924] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:05.694 [2024-07-14 07:43:21.599948] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:05.694 [2024-07-14 07:43:21.686349] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:05.694 07:43:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.694 
07:43:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.694 07:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.694 07:43:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.694 07:43:21 -- common/autotest_common.sh@10 -- # set +x 00:26:05.694 07:43:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.694 07:43:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.694 07:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.694 07:43:21 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:05.694 07:43:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.952 [2024-07-14 07:43:21.870880] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:05.952 [2024-07-14 07:43:21.870945] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:05.952 [2024-07-14 07:43:21.870977] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:05.952 [2024-07-14 07:43:21.871000] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:05.952 [2024-07-14 07:43:21.871012] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:05.952 [2024-07-14 07:43:21.878692] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1059450 was disconnected and freed. delete nvme_qpair. 00:26:06.885 07:43:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.885 07:43:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.885 07:43:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.885 07:43:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.885 07:43:22 -- common/autotest_common.sh@10 -- # set +x 00:26:06.885 07:43:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.885 07:43:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.885 07:43:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.885 07:43:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:06.885 07:43:22 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:06.885 07:43:22 -- host/discovery_remove_ifc.sh@90 -- # killprocess 4458 00:26:06.885 07:43:22 -- common/autotest_common.sh@926 -- # '[' -z 4458 ']' 00:26:06.885 07:43:22 -- common/autotest_common.sh@930 -- # kill -0 4458 00:26:06.885 07:43:22 -- common/autotest_common.sh@931 -- # uname 00:26:06.885 07:43:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:06.885 07:43:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4458 00:26:06.885 07:43:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:06.885 07:43:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:06.885 07:43:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4458' 00:26:06.886 killing process with pid 4458 00:26:06.886 07:43:22 -- common/autotest_common.sh@945 -- # kill 4458 00:26:06.886 07:43:22 -- common/autotest_common.sh@950 -- # wait 4458 00:26:07.143 07:43:23 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:07.143 07:43:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:07.143 07:43:23 -- nvmf/common.sh@116 -- # sync 00:26:07.143 07:43:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:07.143 07:43:23 -- nvmf/common.sh@119 -- # set +e 00:26:07.143 07:43:23 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:26:07.143 07:43:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:07.143 rmmod nvme_tcp 00:26:07.143 rmmod nvme_fabrics 00:26:07.143 rmmod nvme_keyring 00:26:07.143 07:43:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:07.143 07:43:23 -- nvmf/common.sh@123 -- # set -e 00:26:07.143 07:43:23 -- nvmf/common.sh@124 -- # return 0 00:26:07.143 07:43:23 -- nvmf/common.sh@477 -- # '[' -n 4358 ']' 00:26:07.143 07:43:23 -- nvmf/common.sh@478 -- # killprocess 4358 00:26:07.143 07:43:23 -- common/autotest_common.sh@926 -- # '[' -z 4358 ']' 00:26:07.143 07:43:23 -- common/autotest_common.sh@930 -- # kill -0 4358 00:26:07.143 07:43:23 -- common/autotest_common.sh@931 -- # uname 00:26:07.143 07:43:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:07.143 07:43:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4358 00:26:07.143 07:43:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:07.143 07:43:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:07.143 07:43:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4358' 00:26:07.143 killing process with pid 4358 00:26:07.143 07:43:23 -- common/autotest_common.sh@945 -- # kill 4358 00:26:07.143 07:43:23 -- common/autotest_common.sh@950 -- # wait 4358 00:26:07.143 [2024-07-14 07:43:23.207284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b4070 is same with the state(5) to be set 00:26:07.143 [2024-07-14 07:43:23.207339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b4070 is same with the state(5) to be set 00:26:07.400 07:43:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:07.400 07:43:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:07.400 07:43:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:07.400 07:43:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.401 07:43:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:07.401 07:43:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.401 07:43:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.401 07:43:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.937 07:43:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:09.937 00:26:09.937 real 0m18.264s 00:26:09.937 user 0m25.062s 00:26:09.937 sys 0m3.193s 00:26:09.937 07:43:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.937 07:43:25 -- common/autotest_common.sh@10 -- # set +x 00:26:09.937 ************************************ 00:26:09.937 END TEST nvmf_discovery_remove_ifc 00:26:09.937 ************************************ 00:26:09.937 07:43:25 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:26:09.937 07:43:25 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:09.937 07:43:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:09.937 07:43:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:09.937 07:43:25 -- common/autotest_common.sh@10 -- # set +x 00:26:09.937 ************************************ 00:26:09.937 START TEST nvmf_digest 00:26:09.937 ************************************ 00:26:09.937 07:43:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:09.937 * Looking for test storage... 
00:26:09.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.937 07:43:25 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.937 07:43:25 -- nvmf/common.sh@7 -- # uname -s 00:26:09.937 07:43:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.937 07:43:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.937 07:43:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.937 07:43:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.937 07:43:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.937 07:43:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.937 07:43:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.937 07:43:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.937 07:43:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.937 07:43:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.937 07:43:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:09.937 07:43:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:09.937 07:43:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.938 07:43:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.938 07:43:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.938 07:43:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.938 07:43:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.938 07:43:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.938 07:43:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.938 07:43:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.938 07:43:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.938 07:43:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.938 07:43:25 -- paths/export.sh@5 -- # export PATH 00:26:09.938 07:43:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.938 07:43:25 -- nvmf/common.sh@46 -- # : 0 00:26:09.938 07:43:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:09.938 07:43:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:09.938 07:43:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:09.938 07:43:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.938 07:43:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.938 07:43:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:09.938 07:43:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:09.938 07:43:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:09.938 07:43:25 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:09.938 07:43:25 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:09.938 07:43:25 -- host/digest.sh@16 -- # runtime=2 00:26:09.938 07:43:25 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:26:09.938 07:43:25 -- host/digest.sh@132 -- # nvmftestinit 00:26:09.938 07:43:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:09.938 07:43:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.938 07:43:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:09.938 07:43:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:09.938 07:43:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:09.938 07:43:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.938 07:43:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.938 07:43:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.938 07:43:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:09.938 07:43:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:09.938 07:43:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:09.938 07:43:25 -- common/autotest_common.sh@10 -- # set +x 00:26:11.844 07:43:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:11.844 07:43:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:11.844 07:43:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:11.844 07:43:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:11.844 07:43:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:11.844 07:43:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:11.844 07:43:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:11.844 07:43:27 -- 
nvmf/common.sh@294 -- # net_devs=() 00:26:11.844 07:43:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:11.844 07:43:27 -- nvmf/common.sh@295 -- # e810=() 00:26:11.844 07:43:27 -- nvmf/common.sh@295 -- # local -ga e810 00:26:11.844 07:43:27 -- nvmf/common.sh@296 -- # x722=() 00:26:11.844 07:43:27 -- nvmf/common.sh@296 -- # local -ga x722 00:26:11.844 07:43:27 -- nvmf/common.sh@297 -- # mlx=() 00:26:11.844 07:43:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:11.844 07:43:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.844 07:43:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:11.844 07:43:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:11.844 07:43:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:11.844 07:43:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:11.844 07:43:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:11.844 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:11.844 07:43:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:11.844 07:43:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:11.844 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:11.844 07:43:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:11.844 07:43:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:11.844 07:43:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.844 07:43:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:11.844 07:43:27 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.844 07:43:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:11.844 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:11.844 07:43:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.844 07:43:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:11.844 07:43:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.844 07:43:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:11.844 07:43:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.844 07:43:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:11.844 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:11.844 07:43:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.844 07:43:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:11.844 07:43:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:11.844 07:43:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:11.844 07:43:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:11.844 07:43:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.844 07:43:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.844 07:43:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.844 07:43:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:11.844 07:43:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.844 07:43:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.844 07:43:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:11.844 07:43:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.844 07:43:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.844 07:43:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:11.844 07:43:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:11.844 07:43:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.844 07:43:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.844 07:43:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.844 07:43:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.844 07:43:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:11.844 07:43:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.844 07:43:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.844 07:43:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.844 07:43:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:11.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:26:11.844 00:26:11.844 --- 10.0.0.2 ping statistics --- 00:26:11.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.844 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:26:11.844 07:43:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:11.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:26:11.844 00:26:11.845 --- 10.0.0.1 ping statistics --- 00:26:11.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.845 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:11.845 07:43:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.845 07:43:27 -- nvmf/common.sh@410 -- # return 0 00:26:11.845 07:43:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:11.845 07:43:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.845 07:43:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:11.845 07:43:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:11.845 07:43:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.845 07:43:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:11.845 07:43:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:11.845 07:43:27 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:11.845 07:43:27 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:26:11.845 07:43:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:11.845 07:43:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:11.845 07:43:27 -- common/autotest_common.sh@10 -- # set +x 00:26:11.845 ************************************ 00:26:11.845 START TEST nvmf_digest_clean 00:26:11.845 ************************************ 00:26:11.845 07:43:27 -- common/autotest_common.sh@1104 -- # run_digest 00:26:11.845 07:43:27 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:26:11.845 07:43:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:11.845 07:43:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:11.845 07:43:27 -- common/autotest_common.sh@10 -- # set +x 00:26:11.845 07:43:27 -- nvmf/common.sh@469 -- # nvmfpid=8049 00:26:11.845 07:43:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:11.845 07:43:27 -- nvmf/common.sh@470 -- # waitforlisten 8049 00:26:11.845 07:43:27 -- common/autotest_common.sh@819 -- # '[' -z 8049 ']' 00:26:11.845 07:43:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.845 07:43:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:11.845 07:43:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.845 07:43:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:11.845 07:43:27 -- common/autotest_common.sh@10 -- # set +x 00:26:11.845 [2024-07-14 07:43:27.710278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:11.845 [2024-07-14 07:43:27.710347] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.845 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.845 [2024-07-14 07:43:27.776128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.845 [2024-07-14 07:43:27.887966] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:11.845 [2024-07-14 07:43:27.888139] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.845 [2024-07-14 07:43:27.888158] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.845 [2024-07-14 07:43:27.888173] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:11.845 [2024-07-14 07:43:27.888205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.781 07:43:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:12.781 07:43:28 -- common/autotest_common.sh@852 -- # return 0 00:26:12.781 07:43:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:12.781 07:43:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:12.781 07:43:28 -- common/autotest_common.sh@10 -- # set +x 00:26:12.781 07:43:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.781 07:43:28 -- host/digest.sh@120 -- # common_target_config 00:26:12.781 07:43:28 -- host/digest.sh@43 -- # rpc_cmd 00:26:12.781 07:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.781 07:43:28 -- common/autotest_common.sh@10 -- # set +x 00:26:12.781 null0 00:26:12.781 [2024-07-14 07:43:28.765034] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.781 [2024-07-14 07:43:28.789252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.781 07:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.781 07:43:28 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:26:12.781 07:43:28 -- host/digest.sh@77 -- # local rw bs qd 00:26:12.781 07:43:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:12.781 07:43:28 -- host/digest.sh@80 -- # rw=randread 00:26:12.781 07:43:28 -- host/digest.sh@80 -- # bs=4096 00:26:12.781 07:43:28 -- host/digest.sh@80 -- # qd=128 00:26:12.781 07:43:28 -- host/digest.sh@82 -- # bperfpid=8205 00:26:12.781 07:43:28 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:12.781 07:43:28 -- host/digest.sh@83 -- # waitforlisten 8205 /var/tmp/bperf.sock 00:26:12.781 07:43:28 -- common/autotest_common.sh@819 -- # '[' -z 8205 ']' 00:26:12.781 07:43:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:12.781 07:43:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:12.781 07:43:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:12.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:12.781 07:43:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:12.781 07:43:28 -- common/autotest_common.sh@10 -- # set +x 00:26:12.781 [2024-07-14 07:43:28.831020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:12.781 [2024-07-14 07:43:28.831092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid8205 ] 00:26:12.781 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.781 [2024-07-14 07:43:28.891521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.040 [2024-07-14 07:43:29.004951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.607 07:43:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:13.607 07:43:29 -- common/autotest_common.sh@852 -- # return 0 00:26:13.607 07:43:29 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:13.607 07:43:29 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:13.607 07:43:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:14.177 07:43:30 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.177 07:43:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.435 nvme0n1 00:26:14.435 07:43:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:14.435 07:43:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.435 Running I/O for 2 seconds... 
00:26:16.970 00:26:16.970 Latency(us) 00:26:16.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.970 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:16.970 nvme0n1 : 2.00 21376.47 83.50 0.00 0.00 5980.17 2949.12 15825.73 00:26:16.970 =================================================================================================================== 00:26:16.970 Total : 21376.47 83.50 0.00 0.00 5980.17 2949.12 15825.73 00:26:16.970 0 00:26:16.970 07:43:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:16.970 07:43:32 -- host/digest.sh@92 -- # get_accel_stats 00:26:16.970 07:43:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:16.970 07:43:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:16.970 07:43:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:16.970 | select(.opcode=="crc32c") 00:26:16.970 | "\(.module_name) \(.executed)"' 00:26:16.970 07:43:32 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:16.970 07:43:32 -- host/digest.sh@93 -- # exp_module=software 00:26:16.970 07:43:32 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:16.970 07:43:32 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:16.970 07:43:32 -- host/digest.sh@97 -- # killprocess 8205 00:26:16.970 07:43:32 -- common/autotest_common.sh@926 -- # '[' -z 8205 ']' 00:26:16.970 07:43:32 -- common/autotest_common.sh@930 -- # kill -0 8205 00:26:16.970 07:43:32 -- common/autotest_common.sh@931 -- # uname 00:26:16.970 07:43:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:16.970 07:43:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 8205 00:26:16.970 07:43:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:16.970 07:43:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:16.970 07:43:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 8205' 00:26:16.970 killing process with pid 8205 00:26:16.970 07:43:32 -- common/autotest_common.sh@945 -- # kill 8205 00:26:16.970 Received shutdown signal, test time was about 2.000000 seconds 00:26:16.970 00:26:16.970 Latency(us) 00:26:16.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.970 =================================================================================================================== 00:26:16.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.970 07:43:32 -- common/autotest_common.sh@950 -- # wait 8205 00:26:16.970 07:43:33 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:26:16.970 07:43:33 -- host/digest.sh@77 -- # local rw bs qd 00:26:16.970 07:43:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:16.970 07:43:33 -- host/digest.sh@80 -- # rw=randread 00:26:16.970 07:43:33 -- host/digest.sh@80 -- # bs=131072 00:26:16.970 07:43:33 -- host/digest.sh@80 -- # qd=16 00:26:16.970 07:43:33 -- host/digest.sh@82 -- # bperfpid=8665 00:26:16.970 07:43:33 -- host/digest.sh@83 -- # waitforlisten 8665 /var/tmp/bperf.sock 00:26:16.970 07:43:33 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:16.970 07:43:33 -- common/autotest_common.sh@819 -- # '[' -z 8665 ']' 00:26:16.970 07:43:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:16.970 07:43:33 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:26:16.970 07:43:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:16.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:16.970 07:43:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:16.970 07:43:33 -- common/autotest_common.sh@10 -- # set +x 00:26:17.236 [2024-07-14 07:43:33.170725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:17.236 [2024-07-14 07:43:33.170806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid8665 ] 00:26:17.236 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.236 Zero copy mechanism will not be used. 00:26:17.236 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.236 [2024-07-14 07:43:33.234013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.236 [2024-07-14 07:43:33.348687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.236 07:43:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:17.236 07:43:33 -- common/autotest_common.sh@852 -- # return 0 00:26:17.236 07:43:33 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:17.236 07:43:33 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:17.236 07:43:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:17.805 07:43:33 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.805 07:43:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.064 nvme0n1 00:26:18.064 07:43:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:18.064 07:43:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.064 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.064 Zero copy mechanism will not be used. 00:26:18.064 Running I/O for 2 seconds... 
00:26:20.598 00:26:20.598 Latency(us) 00:26:20.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.598 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:20.598 nvme0n1 : 2.01 2361.44 295.18 0.00 0.00 6771.79 5801.15 15631.55 00:26:20.598 =================================================================================================================== 00:26:20.598 Total : 2361.44 295.18 0.00 0.00 6771.79 5801.15 15631.55 00:26:20.598 0 00:26:20.598 07:43:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:20.598 07:43:36 -- host/digest.sh@92 -- # get_accel_stats 00:26:20.598 07:43:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:20.598 07:43:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:20.598 07:43:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:20.598 | select(.opcode=="crc32c") 00:26:20.598 | "\(.module_name) \(.executed)"' 00:26:20.598 07:43:36 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:20.598 07:43:36 -- host/digest.sh@93 -- # exp_module=software 00:26:20.598 07:43:36 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:20.598 07:43:36 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:20.598 07:43:36 -- host/digest.sh@97 -- # killprocess 8665 00:26:20.598 07:43:36 -- common/autotest_common.sh@926 -- # '[' -z 8665 ']' 00:26:20.598 07:43:36 -- common/autotest_common.sh@930 -- # kill -0 8665 00:26:20.598 07:43:36 -- common/autotest_common.sh@931 -- # uname 00:26:20.598 07:43:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:20.598 07:43:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 8665 00:26:20.598 07:43:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:20.598 07:43:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:20.598 07:43:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 8665' 00:26:20.598 killing process with pid 8665 00:26:20.598 07:43:36 -- common/autotest_common.sh@945 -- # kill 8665 00:26:20.598 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.598 00:26:20.598 Latency(us) 00:26:20.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.598 =================================================================================================================== 00:26:20.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.598 07:43:36 -- common/autotest_common.sh@950 -- # wait 8665 00:26:20.598 07:43:36 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:26:20.598 07:43:36 -- host/digest.sh@77 -- # local rw bs qd 00:26:20.598 07:43:36 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:20.598 07:43:36 -- host/digest.sh@80 -- # rw=randwrite 00:26:20.598 07:43:36 -- host/digest.sh@80 -- # bs=4096 00:26:20.598 07:43:36 -- host/digest.sh@80 -- # qd=128 00:26:20.598 07:43:36 -- host/digest.sh@82 -- # bperfpid=9189 00:26:20.598 07:43:36 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:20.598 07:43:36 -- host/digest.sh@83 -- # waitforlisten 9189 /var/tmp/bperf.sock 00:26:20.598 07:43:36 -- common/autotest_common.sh@819 -- # '[' -z 9189 ']' 00:26:20.598 07:43:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.598 07:43:36 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:26:20.598 07:43:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.598 07:43:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:20.598 07:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:20.857 [2024-07-14 07:43:36.800722] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:20.857 [2024-07-14 07:43:36.800800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9189 ] 00:26:20.857 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.857 [2024-07-14 07:43:36.860778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.857 [2024-07-14 07:43:36.970934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.857 07:43:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:20.857 07:43:37 -- common/autotest_common.sh@852 -- # return 0 00:26:20.857 07:43:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:20.857 07:43:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:20.857 07:43:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.427 07:43:37 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.427 07:43:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.686 nvme0n1 00:26:21.686 07:43:37 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:21.686 07:43:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.944 Running I/O for 2 seconds... 
00:26:23.845 00:26:23.845 Latency(us) 00:26:23.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.845 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:23.845 nvme0n1 : 2.01 19044.55 74.39 0.00 0.00 6706.39 6310.87 16311.18 00:26:23.845 =================================================================================================================== 00:26:23.846 Total : 19044.55 74.39 0.00 0.00 6706.39 6310.87 16311.18 00:26:23.846 0 00:26:23.846 07:43:39 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:23.846 07:43:39 -- host/digest.sh@92 -- # get_accel_stats 00:26:23.846 07:43:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:23.846 07:43:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:23.846 07:43:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:23.846 | select(.opcode=="crc32c") 00:26:23.846 | "\(.module_name) \(.executed)"' 00:26:24.103 07:43:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:24.103 07:43:40 -- host/digest.sh@93 -- # exp_module=software 00:26:24.103 07:43:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:24.103 07:43:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:24.103 07:43:40 -- host/digest.sh@97 -- # killprocess 9189 00:26:24.103 07:43:40 -- common/autotest_common.sh@926 -- # '[' -z 9189 ']' 00:26:24.103 07:43:40 -- common/autotest_common.sh@930 -- # kill -0 9189 00:26:24.103 07:43:40 -- common/autotest_common.sh@931 -- # uname 00:26:24.103 07:43:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:24.103 07:43:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 9189 00:26:24.103 07:43:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:24.103 07:43:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:24.103 07:43:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 9189' 00:26:24.103 killing process with pid 9189 00:26:24.103 07:43:40 -- common/autotest_common.sh@945 -- # kill 9189 00:26:24.103 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.103 00:26:24.103 Latency(us) 00:26:24.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.103 =================================================================================================================== 00:26:24.104 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.104 07:43:40 -- common/autotest_common.sh@950 -- # wait 9189 00:26:24.362 07:43:40 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:26:24.362 07:43:40 -- host/digest.sh@77 -- # local rw bs qd 00:26:24.362 07:43:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.362 07:43:40 -- host/digest.sh@80 -- # rw=randwrite 00:26:24.362 07:43:40 -- host/digest.sh@80 -- # bs=131072 00:26:24.362 07:43:40 -- host/digest.sh@80 -- # qd=16 00:26:24.362 07:43:40 -- host/digest.sh@82 -- # bperfpid=9615 00:26:24.362 07:43:40 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:24.362 07:43:40 -- host/digest.sh@83 -- # waitforlisten 9615 /var/tmp/bperf.sock 00:26:24.362 07:43:40 -- common/autotest_common.sh@819 -- # '[' -z 9615 ']' 00:26:24.362 07:43:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.362 07:43:40 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:26:24.362 07:43:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.362 07:43:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:24.362 07:43:40 -- common/autotest_common.sh@10 -- # set +x 00:26:24.362 [2024-07-14 07:43:40.481996] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:24.362 [2024-07-14 07:43:40.482083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9615 ] 00:26:24.362 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.362 Zero copy mechanism will not be used. 00:26:24.362 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.620 [2024-07-14 07:43:40.543023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.620 [2024-07-14 07:43:40.656383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.620 07:43:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:24.620 07:43:40 -- common/autotest_common.sh@852 -- # return 0 00:26:24.620 07:43:40 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:24.620 07:43:40 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:24.620 07:43:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:24.877 07:43:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.877 07:43:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.443 nvme0n1 00:26:25.443 07:43:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:25.443 07:43:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.443 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.443 Zero copy mechanism will not be used. 00:26:25.443 Running I/O for 2 seconds... 
00:26:27.344 00:26:27.344 Latency(us) 00:26:27.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.344 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:27.344 nvme0n1 : 2.01 1553.94 194.24 0.00 0.00 10229.31 6043.88 15243.19 00:26:27.344 =================================================================================================================== 00:26:27.344 Total : 1553.94 194.24 0.00 0.00 10229.31 6043.88 15243.19 00:26:27.344 0 00:26:27.344 07:43:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:27.344 07:43:43 -- host/digest.sh@92 -- # get_accel_stats 00:26:27.344 07:43:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:27.344 07:43:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:27.344 | select(.opcode=="crc32c") 00:26:27.344 | "\(.module_name) \(.executed)"' 00:26:27.344 07:43:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:27.601 07:43:43 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:27.601 07:43:43 -- host/digest.sh@93 -- # exp_module=software 00:26:27.601 07:43:43 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:27.601 07:43:43 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:27.601 07:43:43 -- host/digest.sh@97 -- # killprocess 9615 00:26:27.601 07:43:43 -- common/autotest_common.sh@926 -- # '[' -z 9615 ']' 00:26:27.601 07:43:43 -- common/autotest_common.sh@930 -- # kill -0 9615 00:26:27.601 07:43:43 -- common/autotest_common.sh@931 -- # uname 00:26:27.601 07:43:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:27.601 07:43:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 9615 00:26:27.858 07:43:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:27.858 07:43:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:27.858 07:43:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 9615' 00:26:27.858 killing process with pid 9615 00:26:27.858 07:43:43 -- common/autotest_common.sh@945 -- # kill 9615 00:26:27.858 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.858 00:26:27.858 Latency(us) 00:26:27.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.858 =================================================================================================================== 00:26:27.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.858 07:43:43 -- common/autotest_common.sh@950 -- # wait 9615 00:26:28.116 07:43:44 -- host/digest.sh@126 -- # killprocess 8049 00:26:28.116 07:43:44 -- common/autotest_common.sh@926 -- # '[' -z 8049 ']' 00:26:28.116 07:43:44 -- common/autotest_common.sh@930 -- # kill -0 8049 00:26:28.116 07:43:44 -- common/autotest_common.sh@931 -- # uname 00:26:28.116 07:43:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:28.116 07:43:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 8049 00:26:28.116 07:43:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:28.116 07:43:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:28.116 07:43:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 8049' 00:26:28.116 killing process with pid 8049 00:26:28.116 07:43:44 -- common/autotest_common.sh@945 -- # kill 8049 00:26:28.116 07:43:44 -- common/autotest_common.sh@950 -- # wait 8049 00:26:28.374 00:26:28.374 real 0m16.671s 
00:26:28.374 user 0m31.410s 00:26:28.374 sys 0m3.734s 00:26:28.374 07:43:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.374 07:43:44 -- common/autotest_common.sh@10 -- # set +x 00:26:28.374 ************************************ 00:26:28.374 END TEST nvmf_digest_clean 00:26:28.374 ************************************ 00:26:28.374 07:43:44 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:26:28.374 07:43:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:28.374 07:43:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.374 07:43:44 -- common/autotest_common.sh@10 -- # set +x 00:26:28.374 ************************************ 00:26:28.374 START TEST nvmf_digest_error 00:26:28.374 ************************************ 00:26:28.374 07:43:44 -- common/autotest_common.sh@1104 -- # run_digest_error 00:26:28.374 07:43:44 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:26:28.374 07:43:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:28.374 07:43:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:28.374 07:43:44 -- common/autotest_common.sh@10 -- # set +x 00:26:28.374 07:43:44 -- nvmf/common.sh@469 -- # nvmfpid=10059 00:26:28.374 07:43:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:28.374 07:43:44 -- nvmf/common.sh@470 -- # waitforlisten 10059 00:26:28.374 07:43:44 -- common/autotest_common.sh@819 -- # '[' -z 10059 ']' 00:26:28.374 07:43:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.374 07:43:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.374 07:43:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.374 07:43:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.374 07:43:44 -- common/autotest_common.sh@10 -- # set +x 00:26:28.374 [2024-07-14 07:43:44.418116] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:28.374 [2024-07-14 07:43:44.418203] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.374 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.374 [2024-07-14 07:43:44.485979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.632 [2024-07-14 07:43:44.604834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:28.632 [2024-07-14 07:43:44.605025] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.632 [2024-07-14 07:43:44.605046] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.632 [2024-07-14 07:43:44.605060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
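The nvmf_digest_error run that follows drives bdevperf through a private RPC socket while crc32c corruption is injected on the target side. A condensed sketch of that sequence; every command string below is taken from the xtrace in this log, and only the comments are added:

```bash
# Condensed bperf sequence from the nvmf_digest_error trace below. Assumes
# the jenkins workspace layout of this run and a target already listening on
# 10.0.0.2:4420 with crc32c assigned to the error module
# (rpc_cmd accel_assign_opc -o crc32c -m error).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# bdevperf runs suspended (-z) against its own RPC socket.
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z &

# Count NVMe errors per opcode and retry indefinitely, so injected digest
# failures surface as retries rather than aborting the workload.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Attach the target over TCP with data digest (--ddgst) enabled.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# With corruption injected on the target (rpc_cmd accel_error_inject_error
# -o crc32c -t corrupt -i 256), drive the 2-second workload.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
```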
00:26:28.632 [2024-07-14 07:43:44.605101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.565 07:43:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.565 07:43:45 -- common/autotest_common.sh@852 -- # return 0 00:26:29.565 07:43:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:29.565 07:43:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:29.565 07:43:45 -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 07:43:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.565 07:43:45 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:29.565 07:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.565 07:43:45 -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 [2024-07-14 07:43:45.427649] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:29.565 07:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.565 07:43:45 -- host/digest.sh@104 -- # common_target_config 00:26:29.565 07:43:45 -- host/digest.sh@43 -- # rpc_cmd 00:26:29.565 07:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.565 07:43:45 -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 null0 00:26:29.565 [2024-07-14 07:43:45.547018] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.565 [2024-07-14 07:43:45.571226] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.565 07:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.565 07:43:45 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:26:29.565 07:43:45 -- host/digest.sh@54 -- # local rw bs qd 00:26:29.565 07:43:45 -- host/digest.sh@56 -- # rw=randread 00:26:29.565 07:43:45 -- host/digest.sh@56 -- # bs=4096 00:26:29.565 07:43:45 -- host/digest.sh@56 -- # qd=128 00:26:29.565 07:43:45 -- host/digest.sh@58 -- # bperfpid=10217 00:26:29.565 07:43:45 -- host/digest.sh@60 -- # waitforlisten 10217 /var/tmp/bperf.sock 00:26:29.565 07:43:45 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:29.565 07:43:45 -- common/autotest_common.sh@819 -- # '[' -z 10217 ']' 00:26:29.565 07:43:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:29.565 07:43:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:29.565 07:43:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:29.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:29.565 07:43:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:29.565 07:43:45 -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 [2024-07-14 07:43:45.613519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
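[editor note] The accel_assign_opc call above is the pre-init step the whole test hinges on: crc32c is assigned to the "error" module while the framework is still paused. The individual RPCs behind common_target_config are not echoed in the trace; under this configuration they amount to roughly the sequence below. The RPC names are real SPDK RPCs, the null-bdev size/block-size arguments are assumptions, and the NQN, address, and port are taken from the attach step later in the log:

rpc=./scripts/rpc.py
$rpc accel_assign_opc -o crc32c -m error        # route crc32c to the error-injection module
$rpc framework_start_init                       # leave --wait-for-rpc mode
$rpc bdev_null_create null0 1000 512            # namespace backing bdev (size/bs assumed)
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420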
00:26:29.565 [2024-07-14 07:43:45.613590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10217 ] 00:26:29.565 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.565 [2024-07-14 07:43:45.677753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.823 [2024-07-14 07:43:45.798154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.387 07:43:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:30.387 07:43:46 -- common/autotest_common.sh@852 -- # return 0 00:26:30.387 07:43:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:30.387 07:43:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:30.645 07:43:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:30.645 07:43:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.645 07:43:46 -- common/autotest_common.sh@10 -- # set +x 00:26:30.645 07:43:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.645 07:43:46 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:30.645 07:43:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.209 nvme0n1 00:26:31.209 07:43:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:31.209 07:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:31.209 07:43:47 -- common/autotest_common.sh@10 -- # set +x 00:26:31.209 07:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:31.209 07:43:47 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:31.209 07:43:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:31.209 Running I/O for 2 seconds... 
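[editor note] The controller is attached with --ddgst, so every NVMe/TCP data PDU carries a DDGST field: a CRC-32C (Castagnoli) of the PDU payload, which the receiver recomputes and compares. Because the target's crc32c operations now go through the error module, and accel_error_inject_error -o crc32c -t corrupt -i 256 corrupts every 256th result, roughly one digest in 256 arrives wrong and the host-side check fails. For reference, a minimal bitwise CRC-32C in shell (illustration only; real implementations are table-driven or use the SSE4.2 crc32 instruction):

# Reflected CRC-32C: init 0xFFFFFFFF, reversed polynomial 0x82F63B78, final XOR 0xFFFFFFFF.
crc32c() {
    local crc=$((0xFFFFFFFF)) byte bit
    for byte in "$@"; do
        crc=$(( crc ^ byte ))
        for bit in {1..8}; do
            if (( crc & 1 )); then
                crc=$(( (crc >> 1) ^ 0x82F63B78 ))
            else
                crc=$(( crc >> 1 ))
            fi
        done
    done
    printf '0x%08X\n' $(( crc ^ 0xFFFFFFFF ))
}
crc32c 0x31 0x32 0x33 0x34 0x35 0x36 0x37 0x38 0x39   # "123456789" -> 0xE3069283 (standard check value)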
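[editor note] What follows is the expected failure storm: each corrupted digest is reported by nvme_tcp.c as a data digest error, and the affected READ completes with status (00/22), which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR, i.e. status code type 0x0 (generic) and status code 0x22. Because bdev_nvme was configured with --bdev-retry-count -1, every such completion is retried instead of being failed up the stack, which is why the workload still finishes despite thousands of these entries. A small decoder for the (SCT/SC) pair in those lines, with the type names taken from the NVMe base specification:

# Decode the "(SCT/SC)" pair from spdk_nvme_print_completion output.
decode_status() {
    local sct=$((16#$1)) sc=$((16#$2)) type
    case "$sct" in
        0) type="GENERIC" ;;
        1) type="COMMAND SPECIFIC" ;;
        2) type="MEDIA AND DATA INTEGRITY" ;;
        3) type="PATH RELATED" ;;
        *) type="VENDOR SPECIFIC / RESERVED" ;;
    esac
    printf 'sct=0x%02X (%s) sc=0x%02X\n' "$sct" "$type" "$sc"
}
decode_status 00 22   # sct=0x00 (GENERIC) sc=0x22: Command Transient Transport Error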
00:26:31.209 [2024-07-14 07:43:47.341403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.209 [2024-07-14 07:43:47.341454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.209 [2024-07-14 07:43:47.341474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.209 [2024-07-14 07:43:47.354798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.209 [2024-07-14 07:43:47.354834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.209 [2024-07-14 07:43:47.354861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.209 [2024-07-14 07:43:47.366843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.209 [2024-07-14 07:43:47.366884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.209 [2024-07-14 07:43:47.366919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.209 [2024-07-14 07:43:47.378936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.209 [2024-07-14 07:43:47.378969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.209 [2024-07-14 07:43:47.378989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.467 [2024-07-14 07:43:47.391338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.467 [2024-07-14 07:43:47.391376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.467 [2024-07-14 07:43:47.391396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.467 [2024-07-14 07:43:47.404300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.467 [2024-07-14 07:43:47.404332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.467 [2024-07-14 07:43:47.404351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.467 [2024-07-14 07:43:47.416346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.467 [2024-07-14 07:43:47.416379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.467 [2024-07-14 07:43:47.416398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.467 [2024-07-14 07:43:47.428144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.467 [2024-07-14 07:43:47.428191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.467 [2024-07-14 07:43:47.428210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.467 [2024-07-14 07:43:47.440271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.440304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.440339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.452960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.452993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.453012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.464957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.465005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.465024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.476584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.476615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.476632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.488308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.488340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.488357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.500963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.501011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.501029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.512889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.512922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.512940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.524694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.524726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.524744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.537575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.537607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.537625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.549055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.549103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.549122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.561086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.561135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.561160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.573728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.573760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.573779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.585777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.585813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:31.468 [2024-07-14 07:43:47.585847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.597517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.597548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.597565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.609390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.609423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.609441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.621946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.621980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.621999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.468 [2024-07-14 07:43:47.633981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.468 [2024-07-14 07:43:47.634014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.468 [2024-07-14 07:43:47.634033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.646123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.646172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.646192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.658286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.658334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.658353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.670981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.671020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:16709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.671039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.683199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.683234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.683252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.694886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.694936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.694956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.707013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.707046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.707065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.719467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.719501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.719520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.731402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.731433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.731450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.743183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.743215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.743233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.755638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.755670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.755688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.767599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.767645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.767662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.779379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.779410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.779429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.791566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.791599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.791633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.803844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.803891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.803916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.815862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.815916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.815935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.827704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.827736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.827771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.839779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 
00:26:31.726 [2024-07-14 07:43:47.839812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.839831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.852279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.852311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.852329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.864242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.864275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.864293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.876061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.876122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.876147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.726 [2024-07-14 07:43:47.888203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.726 [2024-07-14 07:43:47.888263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.726 [2024-07-14 07:43:47.888297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.901377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.901411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.901429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.912961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.912993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.913014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.925057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.925091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.925114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.937749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.937783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.937802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.949902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.949950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.949968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.961513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.961547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.961566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.974112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.974145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.974164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.986195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.986234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.986257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:47.998013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:47.998049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:47.998068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.010588] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.010621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.010659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.022535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.022567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.022585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.034378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.034410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.034431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.046041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.046091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.046109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.058857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.058898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.058917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.070785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.070817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.070849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.082628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.082659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.082696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.094646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.094695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.094733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.107299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.107330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.107367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.118119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.118151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.118169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.132135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.132166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.132201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.984 [2024-07-14 07:43:48.142551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:31.984 [2024-07-14 07:43:48.142583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.984 [2024-07-14 07:43:48.142600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.155542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.155573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.155605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.167450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.167481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.167498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.180088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.180122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.180140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.191650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.191686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.191704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.203471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.203505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.203524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.216061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.216094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.216113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.228102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.228137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.228156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.239574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.239607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.239628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.251904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.251948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.251967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.263495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.263528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.263561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.275288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.275323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.275342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.287191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.287238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.287257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.299543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.299577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.299596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.311408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.311441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.311460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.323083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.323120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.323153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.336189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.336226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.241 [2024-07-14 07:43:48.336260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.348572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.348610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.348632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.360849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.360913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.360933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.373881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.373930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.373950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.386407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.241 [2024-07-14 07:43:48.386444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.241 [2024-07-14 07:43:48.386464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.241 [2024-07-14 07:43:48.398451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.242 [2024-07-14 07:43:48.398487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.242 [2024-07-14 07:43:48.398519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.499 [2024-07-14 07:43:48.412108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.499 [2024-07-14 07:43:48.412143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.499 [2024-07-14 07:43:48.412174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.499 [2024-07-14 07:43:48.424482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.499 [2024-07-14 07:43:48.424517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22677 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.499 [2024-07-14 07:43:48.424549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.499 [2024-07-14 07:43:48.437016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.499 [2024-07-14 07:43:48.437047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.499 [2024-07-14 07:43:48.437064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.499 [2024-07-14 07:43:48.449974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.450006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.450027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.462345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.462380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.462404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.474597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.474632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.474654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.487015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.487046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.487066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.500073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.500105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.500123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.512555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.512600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.512620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.524802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.524839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.524861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.537809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.537849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.537879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.550394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.550430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.550453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.562599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.562634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.562654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.574944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.574991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.575008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.588165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.588215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.588236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.600473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.600508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.600533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.612446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.612481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.612502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.625704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.625743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.625764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.637740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.637776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.637799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.650994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.651027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.651046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.500 [2024-07-14 07:43:48.663477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.500 [2024-07-14 07:43:48.663516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.500 [2024-07-14 07:43:48.663537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.676408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.676446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.676469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.688579] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.688615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.688640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.701846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.701891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.701917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.714220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.714271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.714295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.726544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.726585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.726606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.739543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.739579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.739600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.751856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.751900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.751936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.763964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.763997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.764015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.777080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.777112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.777131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.789518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.789558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.789579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.801715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.801754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.801776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.814009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.814043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.814064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.827159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.827210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.827235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.839406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.839445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.839467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.851531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.851566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.851587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.864631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.864668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.864693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.876782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.876823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.876845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.889081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.889120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.889142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.902191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.902230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.902250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.914448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.914484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.914504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.759 [2024-07-14 07:43:48.926851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:32.759 [2024-07-14 07:43:48.926895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.759 [2024-07-14 07:43:48.926933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:48.939515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:48.939551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:48.939582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:48.952640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:48.952676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:48.952697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:48.964879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:48.964915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:48.964949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:48.977042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:48.977074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:48.977092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:48.990054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:48.990086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:48.990105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.002507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.002545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:49.002570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.014876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.014925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:49.014943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.028063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.028096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.018 [2024-07-14 07:43:49.028114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.040627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.040665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:49.040689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.052800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.052842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:49.052863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.065047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.065079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:49.065112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.078220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.078274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:49.078295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.090469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.090508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:49.090528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.018 [2024-07-14 07:43:49.102791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.018 [2024-07-14 07:43:49.102829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.018 [2024-07-14 07:43:49.102849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.019 [2024-07-14 07:43:49.116038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.019 [2024-07-14 07:43:49.116072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:6602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.019 [2024-07-14 07:43:49.116091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.019 [2024-07-14 07:43:49.128574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.019 [2024-07-14 07:43:49.128614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.019 [2024-07-14 07:43:49.128635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.019 [2024-07-14 07:43:49.140799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.019 [2024-07-14 07:43:49.140835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.019 [2024-07-14 07:43:49.140860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.019 [2024-07-14 07:43:49.152911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.019 [2024-07-14 07:43:49.152944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.019 [2024-07-14 07:43:49.152977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.019 [2024-07-14 07:43:49.166020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.019 [2024-07-14 07:43:49.166053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.019 [2024-07-14 07:43:49.166076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.019 [2024-07-14 07:43:49.178499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.019 [2024-07-14 07:43:49.178534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.019 [2024-07-14 07:43:49.178555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.276 [2024-07-14 07:43:49.191173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.277 [2024-07-14 07:43:49.191205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.277 [2024-07-14 07:43:49.191223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.277 [2024-07-14 07:43:49.203550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.277 [2024-07-14 07:43:49.203586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.277 [2024-07-14 07:43:49.203606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.277 [2024-07-14 07:43:49.216712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.277 [2024-07-14 07:43:49.216752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.277 [2024-07-14 07:43:49.216772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.277 [2024-07-14 07:43:49.229088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.277 [2024-07-14 07:43:49.229120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.277 [2024-07-14 07:43:49.229138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.277 [2024-07-14 07:43:49.241314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.277 [2024-07-14 07:43:49.241349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.277 [2024-07-14 07:43:49.241369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.277 [2024-07-14 07:43:49.254529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.277 [2024-07-14 07:43:49.254565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.277 [2024-07-14 07:43:49.254585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.277 [2024-07-14 07:43:49.266999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.277 [2024-07-14 07:43:49.267035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.277 [2024-07-14 07:43:49.267059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.277 [2024-07-14 07:43:49.278841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 00:26:33.277 [2024-07-14 07:43:49.278886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.277 [2024-07-14 07:43:49.278911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.277 [2024-07-14 07:43:49.292021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00) 
00:26:33.277 [2024-07-14 07:43:49.292053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.277 [2024-07-14 07:43:49.292072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:33.277 [2024-07-14 07:43:49.304436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00)
00:26:33.277 [2024-07-14 07:43:49.304471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.277 [2024-07-14 07:43:49.304495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:33.277 [2024-07-14 07:43:49.316717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2168f00)
00:26:33.277 [2024-07-14 07:43:49.316752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.277 [2024-07-14 07:43:49.316772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:33.277
00:26:33.277 Latency(us)
00:26:33.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:33.277 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:33.277 nvme0n1 : 2.00 20604.86 80.49 0.00 0.00 6204.16 2997.67 16505.36
00:26:33.277 ===================================================================================================================
00:26:33.277 Total : 20604.86 80.49 0.00 0.00 6204.16 2997.67 16505.36
00:26:33.277 0
00:26:33.277 07:43:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:33.277 07:43:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:33.277 07:43:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
07:43:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:33.277 | .driver_specific
00:26:33.277 | .nvme_error
00:26:33.277 | .status_code
00:26:33.277 | .command_transient_transport_error'
00:26:33.535 07:43:49 -- host/digest.sh@71 -- # (( 161 > 0 ))
00:26:33.535 07:43:49 -- host/digest.sh@73 -- # killprocess 10217
00:26:33.535 07:43:49 -- common/autotest_common.sh@926 -- # '[' -z 10217 ']'
00:26:33.535 07:43:49 -- common/autotest_common.sh@930 -- # kill -0 10217
00:26:33.535 07:43:49 -- common/autotest_common.sh@931 -- # uname
00:26:33.535 07:43:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:33.535 07:43:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 10217
00:26:33.535 07:43:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:33.535 07:43:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:33.535 07:43:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 10217'
00:26:33.535 killing process with pid 10217
00:26:33.535 07:43:49 -- common/autotest_common.sh@945 -- # kill 10217
00:26:33.535 Received shutdown signal, test time was about 2.000000 seconds
00:26:33.535
00:26:33.535 Latency(us)
00:26:33.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:33.535 ===================================================================================================================
00:26:33.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:33.535 07:43:49 -- common/autotest_common.sh@950 -- # wait 10217
00:26:33.794 07:43:49 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:26:33.794 07:43:49 -- host/digest.sh@54 -- # local rw bs qd
00:26:33.794 07:43:49 -- host/digest.sh@56 -- # rw=randread
00:26:33.794 07:43:49 -- host/digest.sh@56 -- # bs=131072
00:26:33.794 07:43:49 -- host/digest.sh@56 -- # qd=16
00:26:33.794 07:43:49 -- host/digest.sh@58 -- # bperfpid=10766
00:26:33.794 07:43:49 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:33.794 07:43:49 -- host/digest.sh@60 -- # waitforlisten 10766 /var/tmp/bperf.sock
00:26:33.794 07:43:49 -- common/autotest_common.sh@819 -- # '[' -z 10766 ']'
00:26:33.794 07:43:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:33.794 07:43:49 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:33.794 07:43:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:33.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:43:49 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:33.794 07:43:49 -- common/autotest_common.sh@10 -- # set +x
00:26:33.794 [2024-07-14 07:43:49.904738] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:26:33.794 [2024-07-14 07:43:49.904821] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10766 ]
00:26:33.794 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:33.794 Zero copy mechanism will not be used.
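The xtrace above is host/digest.sh finishing one measurement: get_transient_errcount pulls the per-status error counters out of bdevperf over its RPC socket, the (( 161 > 0 )) check asserts that the injected digest errors really were counted (161 reads in this run completed as COMMAND TRANSIENT TRANSPORT ERROR), and killprocess/wait reap the bdevperf instance before run_bperf_err starts the next pattern (random 131072-byte reads at queue depth 16). A minimal bash sketch of the two traced helpers, reconstructed from the commands shown at host/digest.sh@18 and @27-@28; the script in the repository may differ in details:

    # Forward an RPC to the bdevperf instance listening on the test's
    # private UNIX socket (the command traced at host/digest.sh@18).
    bperf_rpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    # Ask bdevperf for per-bdev I/O statistics and extract how many
    # completions carried the COMMAND TRANSIENT TRANSPORT ERROR status.
    # bdev_get_iostat only includes this breakdown because the bdev_nvme
    # module was configured with --nvme-error-stat (see the next run's setup).
    get_transient_errcount() {
        bperf_rpc bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }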
00:26:33.794 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.052 [2024-07-14 07:43:49.964789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.052 [2024-07-14 07:43:50.077035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.986 07:43:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:34.986 07:43:50 -- common/autotest_common.sh@852 -- # return 0 00:26:34.986 07:43:50 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.986 07:43:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.986 07:43:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:34.986 07:43:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.986 07:43:51 -- common/autotest_common.sh@10 -- # set +x 00:26:35.244 07:43:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:35.244 07:43:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.244 07:43:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.503 nvme0n1 00:26:35.503 07:43:51 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:35.503 07:43:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:35.503 07:43:51 -- common/autotest_common.sh@10 -- # set +x 00:26:35.503 07:43:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:35.503 07:43:51 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:35.503 07:43:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.503 Zero copy mechanism will not be used. 00:26:35.503 Running I/O for 2 seconds... 
00:26:35.503 [2024-07-14 07:43:51.659408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.503 [2024-07-14 07:43:51.659474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.503 [2024-07-14 07:43:51.659497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.503 [2024-07-14 07:43:51.673669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.503 [2024-07-14 07:43:51.673705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.503 [2024-07-14 07:43:51.673725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.687761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.687795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.687818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.701713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.701747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.701770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.715792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.715827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.715854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.729748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.729782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.729802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.743957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.743987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.744006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.757808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.757842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.757876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.771815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.771857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.771893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.786000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.786045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.786071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.800437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.800470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.800494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.815037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.815082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.815102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.829451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.829485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.829504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.843922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.843953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.843973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.762 [2024-07-14 07:43:51.858334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.762 [2024-07-14 07:43:51.858367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.762 [2024-07-14 07:43:51.858386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.763 [2024-07-14 07:43:51.872187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.763 [2024-07-14 07:43:51.872234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.763 [2024-07-14 07:43:51.872254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.763 [2024-07-14 07:43:51.886171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.763 [2024-07-14 07:43:51.886218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.763 [2024-07-14 07:43:51.886238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.763 [2024-07-14 07:43:51.900388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.763 [2024-07-14 07:43:51.900428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.763 [2024-07-14 07:43:51.900448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.763 [2024-07-14 07:43:51.914317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.763 [2024-07-14 07:43:51.914350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.763 [2024-07-14 07:43:51.914370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.763 [2024-07-14 07:43:51.928123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:35.763 [2024-07-14 07:43:51.928152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.763 [2024-07-14 07:43:51.928171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.021 [2024-07-14 07:43:51.942063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.021 [2024-07-14 07:43:51.942108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.021 [2024-07-14 07:43:51.942127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.021 [2024-07-14 07:43:51.956159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.021 [2024-07-14 07:43:51.956189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.021 [2024-07-14 07:43:51.956226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.021 [2024-07-14 07:43:51.970045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.021 [2024-07-14 07:43:51.970090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.021 [2024-07-14 07:43:51.970111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.021 [2024-07-14 07:43:51.983843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.021 [2024-07-14 07:43:51.983887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.021 [2024-07-14 07:43:51.983921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.021 [2024-07-14 07:43:51.998285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.021 [2024-07-14 07:43:51.998320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.021 [2024-07-14 07:43:51.998340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.021 [2024-07-14 07:43:52.012771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.012804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.012823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.026878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.026925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.026943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.041494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.041528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.041549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.055479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.055511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.055531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.069726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.069759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.069778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.083530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.083563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.083582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.097377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.097410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.097445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.111211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.111254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.111271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.125124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.125153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.125188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.139045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.139075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.139101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.152826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.152859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.152887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.166508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.166541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.166561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.022 [2024-07-14 07:43:52.180106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.022 [2024-07-14 07:43:52.180136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.022 [2024-07-14 07:43:52.180153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.280 [2024-07-14 07:43:52.194474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.280 [2024-07-14 07:43:52.194504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.280 [2024-07-14 07:43:52.194522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.280 [2024-07-14 07:43:52.209116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.280 [2024-07-14 07:43:52.209147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.280 [2024-07-14 07:43:52.209164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.280 [2024-07-14 07:43:52.223805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.223838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.223857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.238333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 
00:26:36.281 [2024-07-14 07:43:52.238368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.238388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.252676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.252710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.252729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.267137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.267184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.267201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.281124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.281166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.281183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.295724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.295758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.295778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.309864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.309934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.309953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.325008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.325041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.325058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.338891] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.338930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.338963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.353114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.353145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.353163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.367579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.367613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.367633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.382418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.382453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.382479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.396434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.396467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.396487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.410481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.410515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.410535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.424431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.424464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.424483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:36.281 [2024-07-14 07:43:52.437546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.281 [2024-07-14 07:43:52.437579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.281 [2024-07-14 07:43:52.437598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.452351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.452387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.452407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.467418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.467452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.467470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.481844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.481885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.481919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.497351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.497386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.497406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.512270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.512311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.512331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.526158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.526205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.526225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.540588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.540623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.540642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.554833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.554874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.554910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.569424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.569459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.569478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.583622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.583656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.583675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.597659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.597693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.597712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.611841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.611882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.611917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.625977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.626006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.626023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.640577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.640611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.640631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.654910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.654954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.654971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.668272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.668306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.668326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.682246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.682274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.682290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.540 [2024-07-14 07:43:52.696493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.540 [2024-07-14 07:43:52.696527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.540 [2024-07-14 07:43:52.696546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.710110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.710154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.710174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.724550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.724585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.799 [2024-07-14 07:43:52.724604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.738905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.738935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.738953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.753505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.753538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.753563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.766884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.766929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.766946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.780775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.780809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.795206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.795254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.795274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.808489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.808522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.808541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.822501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.822535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.822555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.837271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.837304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.837324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.850381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.850415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.850434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.863600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.863632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.863652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.877639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.877674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.877693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.892091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.892119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.892136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.906411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.906444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.906463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.920601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.920635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.920654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.934729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.934762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.934781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.948989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.949029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.949046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.799 [2024-07-14 07:43:52.963393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:36.799 [2024-07-14 07:43:52.963426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.799 [2024-07-14 07:43:52.963446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:52.977927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.057 [2024-07-14 07:43:52.977958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.057 [2024-07-14 07:43:52.977976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:52.991352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.057 [2024-07-14 07:43:52.991385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.057 [2024-07-14 07:43:52.991411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:53.005487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.057 [2024-07-14 07:43:53.005521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.057 [2024-07-14 07:43:53.005540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:53.019200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 
00:26:37.057 [2024-07-14 07:43:53.019247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.057 [2024-07-14 07:43:53.019266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:53.032921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.057 [2024-07-14 07:43:53.032965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.057 [2024-07-14 07:43:53.032982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:53.046749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.057 [2024-07-14 07:43:53.046782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.057 [2024-07-14 07:43:53.046800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:53.060541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.057 [2024-07-14 07:43:53.060574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.057 [2024-07-14 07:43:53.060593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:53.074464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.057 [2024-07-14 07:43:53.074497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.057 [2024-07-14 07:43:53.074516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.057 [2024-07-14 07:43:53.088353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.057 [2024-07-14 07:43:53.088386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.088405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.102674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.102707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.102728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.117227] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.117282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.117302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.131390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.131423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.131443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.146189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.146217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.146250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.160900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.160945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.160961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.175369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.175402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.175422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.189262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.189295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.189313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.202845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.202886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.202920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:37.058 [2024-07-14 07:43:53.216386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.058 [2024-07-14 07:43:53.216418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.058 [2024-07-14 07:43:53.216437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.230309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.230337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.230370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.244266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.244298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.244318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.258151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.258195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.258214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.271826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.271858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.271890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.285695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.285728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.285746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.300010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.300038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.300054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.314889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.314935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.314952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.329162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.329192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.329209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.343674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.343708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.343726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.357770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.357802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.357827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.372001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.372030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.372046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.385770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.385802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.385820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.399592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.399625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.399644] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.413451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.413483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.413502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.427468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.427500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.427519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.441401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.441433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.441452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.455075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.455105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.455122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.469144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.469174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.469208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.315 [2024-07-14 07:43:53.483116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.315 [2024-07-14 07:43:53.483146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-07-14 07:43:53.483164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.573 [2024-07-14 07:43:53.497326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.573 [2024-07-14 07:43:53.497358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:37.573 [2024-07-14 07:43:53.497378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.573 [2024-07-14 07:43:53.511853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.573 [2024-07-14 07:43:53.511909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.573 [2024-07-14 07:43:53.511927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.573 [2024-07-14 07:43:53.526329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.573 [2024-07-14 07:43:53.526361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.573 [2024-07-14 07:43:53.526380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.573 [2024-07-14 07:43:53.540265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.573 [2024-07-14 07:43:53.540299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.573 [2024-07-14 07:43:53.540319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.573 [2024-07-14 07:43:53.554827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.573 [2024-07-14 07:43:53.554861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.573 [2024-07-14 07:43:53.554891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.573 [2024-07-14 07:43:53.568721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.573 [2024-07-14 07:43:53.568755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.573 [2024-07-14 07:43:53.568774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.573 [2024-07-14 07:43:53.582734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.573 [2024-07-14 07:43:53.582767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.573 [2024-07-14 07:43:53.582786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.573 [2024-07-14 07:43:53.595757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880) 00:26:37.573 [2024-07-14 07:43:53.595791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.573 [2024-07-14 07:43:53.595816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.573 [2024-07-14 07:43:53.609332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880)
00:26:37.573 [2024-07-14 07:43:53.609366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.573 [2024-07-14 07:43:53.609386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.573 [2024-07-14 07:43:53.622770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880)
00:26:37.573 [2024-07-14 07:43:53.622802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.573 [2024-07-14 07:43:53.622821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.573 [2024-07-14 07:43:53.637470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880)
00:26:37.573 [2024-07-14 07:43:53.637505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.574 [2024-07-14 07:43:53.637524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.574 [2024-07-14 07:43:53.653817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f3880)
00:26:37.574 [2024-07-14 07:43:53.653851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.574 [2024-07-14 07:43:53.653881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.574
00:26:37.574 Latency(us)
00:26:37.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:37.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:37.574 nvme0n1 : 2.01 2191.84 273.98 0.00 0.00 7290.99 5873.97 16117.00
00:26:37.574 ===================================================================================================================
00:26:37.574 Total : 2191.84 273.98 0.00 0.00 7290.99 5873.97 16117.00
00:26:37.574 0
00:26:37.574 07:43:53 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
07:43:53 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
07:43:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
07:43:53 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:37.574 | .driver_specific
00:26:37.574 | .nvme_error
00:26:37.574 | .status_code
00:26:37.574 | .command_transient_transport_error'
00:26:37.831 07:43:53 -- host/digest.sh@71 -- # (( 142 > 0 ))
00:26:37.831 07:43:53 -- host/digest.sh@73 -- # killprocess 10766
00:26:37.831 07:43:53 -- common/autotest_common.sh@926 -- # '[' -z 10766 ']'
00:26:37.831 07:43:53 -- common/autotest_common.sh@930 -- # kill -0 10766
00:26:37.831 07:43:53 -- common/autotest_common.sh@931 -- # uname
00:26:37.831 07:43:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:37.831 07:43:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 10766
00:26:37.831 07:43:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:37.831 07:43:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:37.831 07:43:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 10766'
killing process with pid 10766
07:43:53 -- common/autotest_common.sh@945 -- # kill 10766
Received shutdown signal, test time was about 2.000000 seconds
00:26:37.831
00:26:37.831 Latency(us)
00:26:37.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:37.831 ===================================================================================================================
00:26:37.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:37.831 07:43:53 -- common/autotest_common.sh@950 -- # wait 10766
00:26:38.088 07:43:54 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:26:38.089 07:43:54 -- host/digest.sh@54 -- # local rw bs qd
00:26:38.089 07:43:54 -- host/digest.sh@56 -- # rw=randwrite
00:26:38.089 07:43:54 -- host/digest.sh@56 -- # bs=4096
00:26:38.089 07:43:54 -- host/digest.sh@56 -- # qd=128
00:26:38.089 07:43:54 -- host/digest.sh@58 -- # bperfpid=11316
00:26:38.089 07:43:54 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:38.089 07:43:54 -- host/digest.sh@60 -- # waitforlisten 11316 /var/tmp/bperf.sock
00:26:38.089 07:43:54 -- common/autotest_common.sh@819 -- # '[' -z 11316 ']'
00:26:38.089 07:43:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:38.089 07:43:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:38.089 07:43:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:38.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:38.089 07:43:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:38.089 07:43:54 -- common/autotest_common.sh@10 -- # set +x
00:26:38.089 [2024-07-14 07:43:54.239860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
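The (( 142 > 0 )) check above is the pass gate for the randread leg: digest.sh pulls the transient-error counter out of bdev_get_iostat and requires it to be non-zero before killing the bperf process. Reassembled from the digest.sh@27/@18/@28 trace lines (a sketch pieced together from the trace, not copied from the script source), the helper amounts to:

    # sketch reconstructed from the traced digest.sh lines
    get_transient_errcount() {
        local bdev=$1
        # bperf_rpc wraps scripts/rpc.py -s /var/tmp/bperf.sock, per the digest.sh@18 trace
        bperf_rpc bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

The teardown then relaunches bdevperf for run_bperf_err randwrite 4096 128 with the arguments traced above (-w randwrite -o 4096 -t 2 -q 128 -z) and waits for its RPC socket before issuing any RPCs.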
00:26:38.089 [2024-07-14 07:43:54.239947] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11316 ]
00:26:38.347 EAL: No free 2048 kB hugepages reported on node 1
00:26:38.347 [2024-07-14 07:43:54.301655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:38.347 [2024-07-14 07:43:54.413751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:39.280 07:43:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:39.280 07:43:55 -- common/autotest_common.sh@852 -- # return 0
00:26:39.280 07:43:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:39.280 07:43:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:39.280 07:43:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:39.280 07:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:39.280 07:43:55 -- common/autotest_common.sh@10 -- # set +x
00:26:39.280 07:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:39.280 07:43:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:39.280 07:43:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:39.845 nvme0n1
00:26:39.845 07:43:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:39.845 07:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:39.845 07:43:55 -- common/autotest_common.sh@10 -- # set +x
00:26:39.845 07:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:39.845 07:43:55 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:39.845 07:43:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:39.845 Running I/O for 2 seconds...
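Before perform_tests starts the 2-second randwrite run, the trace shows the same arming sequence as the read leg: enable NVMe error statistics and unlimited bdev retries on the bperf side, clear any stale crc32c fault, attach the controller with data digest enabled (--ddgst), and only then tell the accel layer to corrupt crc32c results (accel_error_inject_error -o crc32c -t corrupt -i 256, arguments exactly as traced). As bare commands (bperf_rpc targets /var/tmp/bperf.sock as shown; the socket behind rpc_cmd is not visible in this excerpt), the sequence is approximately:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable      # clear any leftover injection
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

With --bdev-retry-count -1 every digest failure is retried, which is why the WRITE errors that follow surface as transient transport errors rather than failed I/O.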
00:26:39.845 [2024-07-14 07:43:55.952193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ee5c8 00:26:39.845 [2024-07-14 07:43:55.953622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.845 [2024-07-14 07:43:55.953677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.845 [2024-07-14 07:43:55.965566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e6738 00:26:39.845 [2024-07-14 07:43:55.966764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.845 [2024-07-14 07:43:55.966798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:39.845 [2024-07-14 07:43:55.977745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f4298 00:26:39.845 [2024-07-14 07:43:55.979034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.846 [2024-07-14 07:43:55.979064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:39.846 [2024-07-14 07:43:55.990674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e49b0 00:26:39.846 [2024-07-14 07:43:55.992016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.846 [2024-07-14 07:43:55.992047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.846 [2024-07-14 07:43:56.002546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e4de8 00:26:39.846 [2024-07-14 07:43:56.003829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.846 [2024-07-14 07:43:56.003862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:39.846 [2024-07-14 07:43:56.015543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e4140 00:26:40.104 [2024-07-14 07:43:56.016895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.016940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.028087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e3060 00:26:40.105 [2024-07-14 07:43:56.029447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.029480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.040497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7970 00:26:40.105 [2024-07-14 07:43:56.041891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.041937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.052790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ed920 00:26:40.105 [2024-07-14 07:43:56.054207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.054253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.065116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ec840 00:26:40.105 [2024-07-14 07:43:56.066600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.066639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.077505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f0350 00:26:40.105 [2024-07-14 07:43:56.079049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.079078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.089914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e2c28 00:26:40.105 [2024-07-14 07:43:56.091484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.091518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.102286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e49b0 00:26:40.105 [2024-07-14 07:43:56.103852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.103893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.114595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f3a28 00:26:40.105 [2024-07-14 07:43:56.116226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.116259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.126861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e1b48 00:26:40.105 [2024-07-14 07:43:56.128458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.128491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.139173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ecc78 00:26:40.105 [2024-07-14 07:43:56.140784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.140817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.151789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f6458 00:26:40.105 [2024-07-14 07:43:56.153354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.153386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.163861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f2d80 00:26:40.105 [2024-07-14 07:43:56.164578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.164609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.176573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:40.105 [2024-07-14 07:43:56.177980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.178010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.188928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:40.105 [2024-07-14 07:43:56.190349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.190382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.201270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fb048 00:26:40.105 [2024-07-14 07:43:56.202693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.202726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.213510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fb8b8 00:26:40.105 [2024-07-14 07:43:56.214966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.214996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.225844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.105 [2024-07-14 07:43:56.227335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.227368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.238336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eaef0 00:26:40.105 [2024-07-14 07:43:56.239888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.239935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.250733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ec840 00:26:40.105 [2024-07-14 07:43:56.252224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.252256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:40.105 [2024-07-14 07:43:56.263107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7da8 00:26:40.105 [2024-07-14 07:43:56.264642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.105 [2024-07-14 07:43:56.264675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.275716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:40.363 [2024-07-14 07:43:56.277269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.277301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.288175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e0630 00:26:40.363 [2024-07-14 07:43:56.289740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.289773] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.300638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e99d8 00:26:40.363 [2024-07-14 07:43:56.302191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.302219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.313062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ec840 00:26:40.363 [2024-07-14 07:43:56.314622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.314654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.325572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eaef0 00:26:40.363 [2024-07-14 07:43:56.327224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.327255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.338087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eb328 00:26:40.363 [2024-07-14 07:43:56.339719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.339751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.348750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e0630 00:26:40.363 [2024-07-14 07:43:56.349611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.349644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.361249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1ca0 00:26:40.363 [2024-07-14 07:43:56.362097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.362127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:40.363 [2024-07-14 07:43:56.373692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eaef0 00:26:40.363 [2024-07-14 07:43:56.374557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.363 [2024-07-14 07:43:56.374589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.386185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ea680 00:26:40.364 [2024-07-14 07:43:56.387075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.387110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.398717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f46d0 00:26:40.364 [2024-07-14 07:43:56.399624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.399656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.411185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f46d0 00:26:40.364 [2024-07-14 07:43:56.412088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.412117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.423753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ea680 00:26:40.364 [2024-07-14 07:43:56.424670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.424702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.436256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eaef0 00:26:40.364 [2024-07-14 07:43:56.437227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.437260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.448758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f46d0 00:26:40.364 [2024-07-14 07:43:56.449696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.449728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.461421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e5ec8 00:26:40.364 [2024-07-14 07:43:56.462421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 
07:43:56.462454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.473825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eb328 00:26:40.364 [2024-07-14 07:43:56.474841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.474880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.486323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e1f80 00:26:40.364 [2024-07-14 07:43:56.487325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.487358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.498800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eb328 00:26:40.364 [2024-07-14 07:43:56.499805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.499837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.511175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e5ec8 00:26:40.364 [2024-07-14 07:43:56.512215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.512247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:40.364 [2024-07-14 07:43:56.523667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ef270 00:26:40.364 [2024-07-14 07:43:56.524670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.364 [2024-07-14 07:43:56.524702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.536452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e5ec8 00:26:40.622 [2024-07-14 07:43:56.537499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.537532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.548949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.550017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:40.622 [2024-07-14 07:43:56.550059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.561502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.562591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.562624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.574046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.575157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.575201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.586527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.587614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.587646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.598947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.600082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.600110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.611371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.612555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.612588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.623807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.624949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.624993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.636234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.637451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20445 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.637484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.650227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.651523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.651556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.662612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.622 [2024-07-14 07:43:56.663934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.663963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.673425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f8a50 00:26:40.622 [2024-07-14 07:43:56.674729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.674760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.685764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f8a50 00:26:40.622 [2024-07-14 07:43:56.687131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.687177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.699646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f8a50 00:26:40.622 [2024-07-14 07:43:56.701226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.701259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.712299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f8a50 00:26:40.622 [2024-07-14 07:43:56.713614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.713651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.724457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e99d8 00:26:40.622 [2024-07-14 07:43:56.725754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25221 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.725787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.736933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f31b8 00:26:40.622 [2024-07-14 07:43:56.738247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.738279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.749336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f31b8 00:26:40.622 [2024-07-14 07:43:56.750644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.750677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.761726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eaab8 00:26:40.622 [2024-07-14 07:43:56.763206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.763238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.774067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ec840 00:26:40.622 [2024-07-14 07:43:56.775400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.775431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.622 [2024-07-14 07:43:56.784914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1ca0 00:26:40.622 [2024-07-14 07:43:56.786198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.622 [2024-07-14 07:43:56.786231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.799103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1ca0 00:26:40.880 [2024-07-14 07:43:56.800381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.800413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.811451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e23b8 00:26:40.880 [2024-07-14 07:43:56.812709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:19268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.812741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.823977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f2d80 00:26:40.880 [2024-07-14 07:43:56.825271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.825303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.836367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f2d80 00:26:40.880 [2024-07-14 07:43:56.837633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.837665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.848711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fbcf0 00:26:40.880 [2024-07-14 07:43:56.849981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.850010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.861148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f4298 00:26:40.880 [2024-07-14 07:43:56.862477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.862509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.873667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f4298 00:26:40.880 [2024-07-14 07:43:56.874954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.874983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.886021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e1f80 00:26:40.880 [2024-07-14 07:43:56.887331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.887363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.898398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7538 00:26:40.880 [2024-07-14 07:43:56.899669] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.899701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.910759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e8d30 00:26:40.880 [2024-07-14 07:43:56.912056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.912084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.923148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f0788 00:26:40.880 [2024-07-14 07:43:56.924399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.924431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.935519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ea248 00:26:40.880 [2024-07-14 07:43:56.936821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.936853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.947743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f6890 00:26:40.880 [2024-07-14 07:43:56.948960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.948989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.960049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ed0b0 00:26:40.880 [2024-07-14 07:43:56.961335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.961366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.972192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1430 00:26:40.880 [2024-07-14 07:43:56.973030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.973059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.984213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fb8b8 00:26:40.880 [2024-07-14 07:43:56.985048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.985077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:56.996448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:40.880 [2024-07-14 07:43:56.997520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:56.997551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:57.008014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e12d8 00:26:40.880 [2024-07-14 07:43:57.010056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:57.010083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:57.019551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190edd58 00:26:40.880 [2024-07-14 07:43:57.020353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:57.020384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:57.032201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ee190 00:26:40.880 [2024-07-14 07:43:57.033239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:57.033276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:40.880 [2024-07-14 07:43:57.044613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ecc78 00:26:40.880 [2024-07-14 07:43:57.045623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.880 [2024-07-14 07:43:57.045655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:41.138 [2024-07-14 07:43:57.057298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ec840 00:26:41.138 [2024-07-14 07:43:57.058350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.138 [2024-07-14 07:43:57.058381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.069805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f6890 00:26:41.139 [2024-07-14 
07:43:57.070918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.070946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.082236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fbcf0 00:26:41.139 [2024-07-14 07:43:57.083323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.083355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.094736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e73e0 00:26:41.139 [2024-07-14 07:43:57.095769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.095800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.107108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f4f40 00:26:41.139 [2024-07-14 07:43:57.108137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.108179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.119513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e5a90 00:26:41.139 [2024-07-14 07:43:57.120602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.120633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.132008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e95a0 00:26:41.139 [2024-07-14 07:43:57.133131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.133160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.144363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e6738 00:26:41.139 [2024-07-14 07:43:57.145507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.145538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.156856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1868 
00:26:41.139 [2024-07-14 07:43:57.158050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.158077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.169312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1868 00:26:41.139 [2024-07-14 07:43:57.170548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.170579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.181846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1868 00:26:41.139 [2024-07-14 07:43:57.183004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.183031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.194230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1868 00:26:41.139 [2024-07-14 07:43:57.195411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.195441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.206734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1868 00:26:41.139 [2024-07-14 07:43:57.207978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.208004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.219118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1868 00:26:41.139 [2024-07-14 07:43:57.220479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.220510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.231538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1868 00:26:41.139 [2024-07-14 07:43:57.232812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.232844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.243931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11471a0) with pdu=0x2000190f1868 00:26:41.139 [2024-07-14 07:43:57.245239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.245270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.256260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eea00 00:26:41.139 [2024-07-14 07:43:57.257533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.257564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.268636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e2c28 00:26:41.139 [2024-07-14 07:43:57.269960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.269992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.281006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e3060 00:26:41.139 [2024-07-14 07:43:57.282358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.282390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.293334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190edd58 00:26:41.139 [2024-07-14 07:43:57.294684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.294715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:41.139 [2024-07-14 07:43:57.305427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f1868 00:26:41.139 [2024-07-14 07:43:57.306334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.139 [2024-07-14 07:43:57.306365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.318168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f2948 00:26:41.398 [2024-07-14 07:43:57.319228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.319260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.330256] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e0ea0 00:26:41.398 [2024-07-14 07:43:57.330738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.330769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.342730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ee5c8 00:26:41.398 [2024-07-14 07:43:57.343718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.343749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.355225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ee5c8 00:26:41.398 [2024-07-14 07:43:57.356464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.356504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.367412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ee5c8 00:26:41.398 [2024-07-14 07:43:57.368677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.368708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.379722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ee5c8 00:26:41.398 [2024-07-14 07:43:57.380977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.381005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.391990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ee5c8 00:26:41.398 [2024-07-14 07:43:57.393111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.393141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.403381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e73e0 00:26:41.398 [2024-07-14 07:43:57.403862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.403921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.415412] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ebfd0 00:26:41.398 [2024-07-14 07:43:57.416316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.416343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.427242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7538 00:26:41.398 [2024-07-14 07:43:57.428316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.428345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.439016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f5be8 00:26:41.398 [2024-07-14 07:43:57.440115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.440143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.451252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f6020 00:26:41.398 [2024-07-14 07:43:57.452356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.452386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.463574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f4f40 00:26:41.398 [2024-07-14 07:43:57.464663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.464695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.476136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fbcf0 00:26:41.398 [2024-07-14 07:43:57.477304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.477335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.488670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e2c28 00:26:41.398 [2024-07-14 07:43:57.489782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.489814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:41.398 
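Each burst above is one injected failure, printed as the same three-record pattern: data_crc32_calc_done() in the TCP transport detects a CRC32C mismatch on a data PDU, nvme_io_qpair_print_command dumps the affected WRITE, and its completion carries the generic NVMe status COMMAND TRANSIENT TRANSPORT ERROR (sct 0x0 / sc 0x22) with dnr:0, i.e. retriable, so the run keeps going while the error counters climb. For eyeballing a saved console log, a minimal tally sketch (the log path is hypothetical; both grep patterns are verbatim from the records above):

```bash
#!/usr/bin/env bash
# Tally the injected-failure pattern in a saved console log.
# The log path is an assumption; the patterns are verbatim from the records above.
log=/tmp/nvmf-digest-console.log

digest_errs=$(grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$log")
transient=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log")

echo "data digest errors:              $digest_errs"
echo "transient transport completions: $transient"
# dnr:0 on every completion marks the error retriable, so no write fails
# permanently while the harness keeps the bdev retry count unlimited.
```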
[2024-07-14 07:43:57.500976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f0788 00:26:41.398 [2024-07-14 07:43:57.502093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.502121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.513391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.398 [2024-07-14 07:43:57.514541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.514572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.525876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.398 [2024-07-14 07:43:57.527028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.527070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.538324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.398 [2024-07-14 07:43:57.539504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.539535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.550843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.398 [2024-07-14 07:43:57.552053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.552080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:41.398 [2024-07-14 07:43:57.563343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.398 [2024-07-14 07:43:57.564562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.398 [2024-07-14 07:43:57.564593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.576041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.577276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.577307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 
m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.588478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.589693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.589724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.600944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.602202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.602228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.613328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.614603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.614634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.625840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.627094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.627120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.638290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.639589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.639620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.650728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.652029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.652055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.663102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.664416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.664447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 
cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.675569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.676863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.676921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.688038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.689348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.689379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.700398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.701727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.701757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.712780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.714138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.714164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.725075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190fac10 00:26:41.657 [2024-07-14 07:43:57.726441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.726472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.737437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190edd58 00:26:41.657 [2024-07-14 07:43:57.738808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.738839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.749628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f4f40 00:26:41.657 [2024-07-14 07:43:57.751004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.751035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:41.657 [2024-07-14 07:43:57.761972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e3060 00:26:41.657 [2024-07-14 07:43:57.763354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.657 [2024-07-14 07:43:57.763385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:41.658 [2024-07-14 07:43:57.774269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7538 00:26:41.658 [2024-07-14 07:43:57.775691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.658 [2024-07-14 07:43:57.775722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:41.658 [2024-07-14 07:43:57.786539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eea00 00:26:41.658 [2024-07-14 07:43:57.787977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.658 [2024-07-14 07:43:57.788017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:41.658 [2024-07-14 07:43:57.798952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eea00 00:26:41.658 [2024-07-14 07:43:57.800391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.658 [2024-07-14 07:43:57.800422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:41.658 [2024-07-14 07:43:57.811495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e1710 00:26:41.658 [2024-07-14 07:43:57.812466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.658 [2024-07-14 07:43:57.812497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:41.658 [2024-07-14 07:43:57.823903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190eea00 00:26:41.658 [2024-07-14 07:43:57.825109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.658 [2024-07-14 07:43:57.825137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:41.916 [2024-07-14 07:43:57.836465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e12d8 00:26:41.916 [2024-07-14 07:43:57.837751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.916 [2024-07-14 07:43:57.837781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.916 [2024-07-14 07:43:57.848704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f9f68 00:26:41.916 [2024-07-14 07:43:57.849975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.916 [2024-07-14 07:43:57.850003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.916 [2024-07-14 07:43:57.861032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7970 00:26:41.916 [2024-07-14 07:43:57.862630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.916 [2024-07-14 07:43:57.862661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:41.916 [2024-07-14 07:43:57.873369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7970 00:26:41.916 [2024-07-14 07:43:57.875317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.916 [2024-07-14 07:43:57.875348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:41.916 [2024-07-14 07:43:57.885615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7970 00:26:41.916 [2024-07-14 07:43:57.886954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.916 [2024-07-14 07:43:57.886980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:41.916 [2024-07-14 07:43:57.897802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7970 00:26:41.916 [2024-07-14 07:43:57.899187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.916 [2024-07-14 07:43:57.899218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:41.916 [2024-07-14 07:43:57.910822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190f7970 00:26:41.916 [2024-07-14 07:43:57.912678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.916 [2024-07-14 07:43:57.912710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.916 [2024-07-14 07:43:57.921777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190e3060 00:26:41.916 [2024-07-14 07:43:57.922752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.916 [2024-07-14 07:43:57.922783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:41.916 [2024-07-14 07:43:57.934110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11471a0) with pdu=0x2000190ee5c8
00:26:41.916 [2024-07-14 07:43:57.935124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.916 [2024-07-14 07:43:57.935167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:41.916
00:26:41.916 Latency(us)
00:26:41.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:41.916 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:41.916 nvme0n1 : 2.00 20573.69 80.37 0.00 0.00 6214.98 2973.39 14466.47
00:26:41.916 ===================================================================================================================
00:26:41.916 Total : 20573.69 80.37 0.00 0.00 6214.98 2973.39 14466.47
00:26:41.916 0
00:26:41.916 07:43:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:41.916 07:43:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:41.916 07:43:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:41.916 07:43:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:41.916 | .driver_specific
00:26:41.916 | .nvme_error
00:26:41.916 | .status_code
00:26:41.916 | .command_transient_transport_error'
00:26:42.175 07:43:58 -- host/digest.sh@71 -- # (( 161 > 0 ))
00:26:42.175 07:43:58 -- host/digest.sh@73 -- # killprocess 11316
00:26:42.175 07:43:58 -- common/autotest_common.sh@926 -- # '[' -z 11316 ']'
00:26:42.175 07:43:58 -- common/autotest_common.sh@930 -- # kill -0 11316
00:26:42.175 07:43:58 -- common/autotest_common.sh@931 -- # uname
00:26:42.175 07:43:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:42.175 07:43:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 11316
00:26:42.175 07:43:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:42.175 07:43:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:42.175 07:43:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 11316'
killing process with pid 11316
07:43:58 -- common/autotest_common.sh@945 -- # kill 11316
Received shutdown signal, test time was about 2.000000 seconds
00:26:42.175
00:26:42.175 Latency(us)
00:26:42.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:42.175 ===================================================================================================================
00:26:42.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:42.175 07:43:58 -- common/autotest_common.sh@950 -- # wait 11316
00:26:42.433 07:43:58 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:26:42.433 07:43:58 -- host/digest.sh@54 -- # local rw bs qd
00:26:42.433 07:43:58 -- host/digest.sh@56 -- # rw=randwrite
00:26:42.433 07:43:58 -- host/digest.sh@56 -- # bs=131072
00:26:42.433 07:43:58 -- host/digest.sh@56 -- # qd=16
00:26:42.433 07:43:58 -- host/digest.sh@58 -- # bperfpid=11867
00:26:42.433 07:43:58 -- host/digest.sh@60 -- # waitforlisten 11867 /var/tmp/bperf.sock
00:26:42.433 07:43:58 -- host/digest.sh@57 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:42.433 07:43:58 -- common/autotest_common.sh@819 -- # '[' -z 11867 ']'
00:26:42.433 07:43:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:42.433 07:43:58 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:42.433 07:43:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:42.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:43:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:42.433 07:43:58 -- common/autotest_common.sh@10 -- # set +x
00:26:42.433 [2024-07-14 07:43:58.561936] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:26:42.433 [2024-07-14 07:43:58.562014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11867 ]
00:26:42.433 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:42.433 Zero copy mechanism will not be used.
00:26:42.433 EAL: No free 2048 kB hugepages reported on node 1
00:26:42.433 [2024-07-14 07:43:58.624934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:42.433 [2024-07-14 07:43:58.737813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:43.622 07:43:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:43.622 07:43:59 -- common/autotest_common.sh@852 -- # return 0
00:26:43.622 07:43:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:43.622 07:43:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:43.622 07:43:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:43.622 07:43:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:43.622 07:43:59 -- common/autotest_common.sh@10 -- # set +x
00:26:43.622 07:43:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:43.622 07:43:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:43.622 07:43:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:44.187 nvme0n1
00:26:44.187 07:44:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:44.187 07:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:44.187 07:44:00 -- common/autotest_common.sh@10 -- # set +x
00:26:44.187 07:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:44.187 07:44:00 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:44.187 07:44:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:44.187 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:44.187 Zero copy mechanism will not be used.
00:26:44.187 Running I/O for 2 seconds...
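The trace above is the setup for the second pass (randwrite, 128 KiB I/O, queue depth 16). Condensed into plain shell it boils down to the sketch below; every flag, path, and RPC name is taken verbatim from the trace, and running it standalone assumes the nvmf target started earlier in the job is still serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. Note the ordering: crc32c error injection is disabled while the controller attaches (so the connect itself succeeds) and re-enabled in corrupt mode just before perform_tests. In the trace, rpc_cmd carries no -s flag, so it goes to the default RPC socket (presumably the nvmf target), while bperf_rpc explicitly targets /var/tmp/bperf.sock:

```bash
# Condensed sketch of the traced setup; not the harness itself.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# (the harness waits for /var/tmp/bperf.sock to appear before issuing RPCs)

trpc() { "$SPDK/scripts/rpc.py" "$@"; }                         # default socket (target)
brpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf app

brpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
trpc accel_error_inject_error -o crc32c -t disable   # attach with digests intact
brpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
trpc accel_error_inject_error -o crc32c -t corrupt -i 32   # -i 32 as in the trace
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```

--ddgst enables the NVMe/TCP data digest on the connection, and --bdev-retry-count -1 retries the transient failures indefinitely, which is why the run below completes with a nonzero error count instead of failing I/O outright.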
00:26:44.187 [2024-07-14 07:44:00.338950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.187 [2024-07-14 07:44:00.339531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.187 [2024-07-14 07:44:00.339592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.444 [2024-07-14 07:44:00.368654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.444 [2024-07-14 07:44:00.369553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.444 [2024-07-14 07:44:00.369597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.444 [2024-07-14 07:44:00.398794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.444 [2024-07-14 07:44:00.399587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.444 [2024-07-14 07:44:00.399629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.444 [2024-07-14 07:44:00.426524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.444 [2024-07-14 07:44:00.427136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.444 [2024-07-14 07:44:00.427187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.444 [2024-07-14 07:44:00.451129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.445 [2024-07-14 07:44:00.451780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.445 [2024-07-14 07:44:00.451817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.445 [2024-07-14 07:44:00.481305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.445 [2024-07-14 07:44:00.482417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.445 [2024-07-14 07:44:00.482467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.445 [2024-07-14 07:44:00.509400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.445 [2024-07-14 07:44:00.510061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.445 [2024-07-14 07:44:00.510099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.445 [2024-07-14 07:44:00.542026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.445 [2024-07-14 07:44:00.543185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.445 [2024-07-14 07:44:00.543236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.445 [2024-07-14 07:44:00.573394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.445 [2024-07-14 07:44:00.574321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.445 [2024-07-14 07:44:00.574373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.445 [2024-07-14 07:44:00.600400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.445 [2024-07-14 07:44:00.601001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.445 [2024-07-14 07:44:00.601040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.702 [2024-07-14 07:44:00.630587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.702 [2024-07-14 07:44:00.631413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.702 [2024-07-14 07:44:00.631449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.702 [2024-07-14 07:44:00.661275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.702 [2024-07-14 07:44:00.662052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.702 [2024-07-14 07:44:00.662090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.702 [2024-07-14 07:44:00.691427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.702 [2024-07-14 07:44:00.692183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.702 [2024-07-14 07:44:00.692235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.702 [2024-07-14 07:44:00.721776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.702 [2024-07-14 07:44:00.722534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.702 [2024-07-14 07:44:00.722569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.702 [2024-07-14 07:44:00.750067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.702 [2024-07-14 07:44:00.750769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.702 [2024-07-14 07:44:00.750803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.702 [2024-07-14 07:44:00.782728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.702 [2024-07-14 07:44:00.783610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.702 [2024-07-14 07:44:00.783646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.702 [2024-07-14 07:44:00.812955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.702 [2024-07-14 07:44:00.814215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.702 [2024-07-14 07:44:00.814250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.702 [2024-07-14 07:44:00.844730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.702 [2024-07-14 07:44:00.845747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.702 [2024-07-14 07:44:00.845798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.960 [2024-07-14 07:44:00.874436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:00.875163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:00.875201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.961 [2024-07-14 07:44:00.903231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:00.903973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:00.904011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.961 [2024-07-14 07:44:00.931598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:00.932245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:00.932282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.961 [2024-07-14 07:44:00.962364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:00.963380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:00.963432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.961 [2024-07-14 07:44:00.991693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:00.992659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:00.992710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.961 [2024-07-14 07:44:01.023286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:01.024469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:01.024507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.961 [2024-07-14 07:44:01.049580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:01.050432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:01.050469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.961 [2024-07-14 07:44:01.080501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:01.081595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:01.081631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.961 [2024-07-14 07:44:01.112034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:44.961 [2024-07-14 07:44:01.112758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.961 [2024-07-14 07:44:01.112800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.141840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.142739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.221 
[2024-07-14 07:44:01.142775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.172607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.173274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.221 [2024-07-14 07:44:01.173310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.203412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.204425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.221 [2024-07-14 07:44:01.204462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.235324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.236093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.221 [2024-07-14 07:44:01.236131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.265816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.267065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.221 [2024-07-14 07:44:01.267117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.296201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.297121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.221 [2024-07-14 07:44:01.297159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.326418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.327275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.221 [2024-07-14 07:44:01.327310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.357278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.358137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:45.221 [2024-07-14 07:44:01.358188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.221 [2024-07-14 07:44:01.387540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.221 [2024-07-14 07:44:01.388578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.221 [2024-07-14 07:44:01.388628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.484 [2024-07-14 07:44:01.416784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.484 [2024-07-14 07:44:01.417344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.484 [2024-07-14 07:44:01.417379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.484 [2024-07-14 07:44:01.446742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.484 [2024-07-14 07:44:01.447969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.484 [2024-07-14 07:44:01.448007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.484 [2024-07-14 07:44:01.476134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.484 [2024-07-14 07:44:01.476903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.484 [2024-07-14 07:44:01.476947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.484 [2024-07-14 07:44:01.506206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.484 [2024-07-14 07:44:01.507410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.484 [2024-07-14 07:44:01.507463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.484 [2024-07-14 07:44:01.536671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.484 [2024-07-14 07:44:01.537944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.484 [2024-07-14 07:44:01.537984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.484 [2024-07-14 07:44:01.567015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.484 [2024-07-14 07:44:01.567635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.484 [2024-07-14 07:44:01.567682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.484 [2024-07-14 07:44:01.597287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.484 [2024-07-14 07:44:01.597979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.484 [2024-07-14 07:44:01.598018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.484 [2024-07-14 07:44:01.627744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.484 [2024-07-14 07:44:01.628528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.484 [2024-07-14 07:44:01.628563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.656900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.657537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.657574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.689321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.690237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.690273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.719536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.720466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.720517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.748564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.749486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.749523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.780172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.781154] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.781206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.810299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.810715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.810751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.839669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.840438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.840476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.868213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.869070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.869106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.742 [2024-07-14 07:44:01.899306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:45.742 [2024-07-14 07:44:01.900306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.742 [2024-07-14 07:44:01.900347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:01.928727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:01.929753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.000 [2024-07-14 07:44:01.929803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:01.959972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:01.960805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.000 [2024-07-14 07:44:01.960843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:01.990291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:01.991049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.000 [2024-07-14 07:44:01.991087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:02.021054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:02.021711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.000 [2024-07-14 07:44:02.021747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:02.051959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:02.053061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.000 [2024-07-14 07:44:02.053098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:02.080838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:02.081265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.000 [2024-07-14 07:44:02.081315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:02.109367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:02.110134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.000 [2024-07-14 07:44:02.110185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:02.140271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:02.141180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.000 [2024-07-14 07:44:02.141218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.000 [2024-07-14 07:44:02.168588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.000 [2024-07-14 07:44:02.169224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.001 [2024-07-14 07:44:02.169261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.258 [2024-07-14 07:44:02.198428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.258 
[2024-07-14 07:44:02.199438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.258 [2024-07-14 07:44:02.199476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.259 [2024-07-14 07:44:02.227782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.259 [2024-07-14 07:44:02.228953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.259 [2024-07-14 07:44:02.228991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.259 [2024-07-14 07:44:02.255498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.259 [2024-07-14 07:44:02.255937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.259 [2024-07-14 07:44:02.255975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.259 [2024-07-14 07:44:02.281075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.259 [2024-07-14 07:44:02.281714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.259 [2024-07-14 07:44:02.281765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.259 [2024-07-14 07:44:02.310200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfb5210) with pdu=0x2000190fef90 00:26:46.259 [2024-07-14 07:44:02.310861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.259 [2024-07-14 07:44:02.310920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.259 00:26:46.259 Latency(us) 00:26:46.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.259 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:46.259 nvme0n1 : 2.01 1038.87 129.86 0.00 0.00 15346.13 8349.77 33593.27 00:26:46.259 =================================================================================================================== 00:26:46.259 Total : 1038.87 129.86 0.00 0.00 15346.13 8349.77 33593.27 00:26:46.259 0 00:26:46.259 07:44:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:46.259 07:44:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:46.259 07:44:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:46.259 07:44:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:46.259 | .driver_specific 00:26:46.259 | .nvme_error 00:26:46.259 | .status_code 00:26:46.259 | .command_transient_transport_error' 00:26:46.516 07:44:02 -- host/digest.sh@71 -- # (( 67 > 0 )) 00:26:46.516 07:44:02 -- host/digest.sh@73 -- # 
killprocess 11867
00:26:46.516 07:44:02 -- common/autotest_common.sh@926 -- # '[' -z 11867 ']'
00:26:46.516 07:44:02 -- common/autotest_common.sh@930 -- # kill -0 11867
00:26:46.516 07:44:02 -- common/autotest_common.sh@931 -- # uname
00:26:46.516 07:44:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:46.516 07:44:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 11867
00:26:46.516 07:44:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:46.516 07:44:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:46.516 07:44:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 11867'
killing process with pid 11867
07:44:02 -- common/autotest_common.sh@945 -- # kill 11867
Received shutdown signal, test time was about 2.000000 seconds
00:26:46.516
00:26:46.516 Latency(us)
00:26:46.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:46.516 ===================================================================================================================
00:26:46.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:46.516 07:44:02 -- common/autotest_common.sh@950 -- # wait 11867
00:26:46.773 07:44:02 -- host/digest.sh@115 -- # killprocess 10059
00:26:46.773 07:44:02 -- common/autotest_common.sh@926 -- # '[' -z 10059 ']'
00:26:46.773 07:44:02 -- common/autotest_common.sh@930 -- # kill -0 10059
00:26:46.773 07:44:02 -- common/autotest_common.sh@931 -- # uname
00:26:46.773 07:44:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:46.773 07:44:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 10059
00:26:46.773 07:44:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:26:46.773 07:44:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:26:46.773 07:44:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 10059'
killing process with pid 10059
00:26:46.773 07:44:02 -- common/autotest_common.sh@945 -- # kill 10059
00:26:46.773 07:44:02 -- common/autotest_common.sh@950 -- # wait 10059
00:26:47.031
00:26:47.031 real 0m18.772s
00:26:47.031 user 0m37.682s
00:26:47.031 sys 0m4.073s
00:26:47.031 07:44:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:47.031 07:44:03 -- common/autotest_common.sh@10 -- # set +x
00:26:47.031 ************************************
00:26:47.031 END TEST nvmf_digest_error
00:26:47.031 ************************************
00:26:47.031 07:44:03 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:26:47.031 07:44:03 -- host/digest.sh@139 -- # nvmftestfini
00:26:47.031 07:44:03 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:47.031 07:44:03 -- nvmf/common.sh@116 -- # sync
00:26:47.031 07:44:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:47.031 07:44:03 -- nvmf/common.sh@119 -- # set +e
00:26:47.031 07:44:03 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:47.031 07:44:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:47.289 07:44:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:47.289 07:44:03 -- nvmf/common.sh@123 -- # set -e
00:26:47.289 07:44:03 -- nvmf/common.sh@124 -- # return 0
00:26:47.289 07:44:03 -- nvmf/common.sh@477 -- # '[' -n 10059 ']'
00:26:47.289 07:44:03 -- nvmf/common.sh@478 -- # killprocess 10059
00:26:47.289 07:44:03 -- common/autotest_common.sh@926 -- # '[' -z 10059 ']'
00:26:47.289 07:44:03 -- common/autotest_common.sh@930 -- # kill -0 10059
00:26:47.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (10059) - No such process
00:26:47.289 07:44:03 -- common/autotest_common.sh@953 -- # echo 'Process with pid 10059 is not found'
Process with pid 10059 is not found
00:26:47.289 07:44:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:26:47.289 07:44:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:47.289 07:44:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:47.289 07:44:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:47.289 07:44:03 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:47.289 07:44:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:47.289 07:44:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:47.289 07:44:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:49.189 07:44:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:26:49.189
00:26:49.189 real 0m39.706s
00:26:49.189 user 1m9.887s
00:26:49.189 sys 0m9.272s
00:26:49.189 07:44:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:49.189 07:44:05 -- common/autotest_common.sh@10 -- # set +x
00:26:49.189 ************************************
00:26:49.189 END TEST nvmf_digest
00:26:49.189 ************************************
00:26:49.189 07:44:05 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:26:49.189 07:44:05 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:26:49.189 07:44:05 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:26:49.189 07:44:05 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:49.189 07:44:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:26:49.189 07:44:05 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:26:49.189 07:44:05 -- common/autotest_common.sh@10 -- # set +x
00:26:49.189 ************************************
00:26:49.189 START TEST nvmf_bdevperf
00:26:49.189 ************************************
00:26:49.189 07:44:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:49.189 * Looking for test storage...
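Before the bdevperf storage probe continues, it is worth noting how the nvmf_digest_error suite that just wrapped up takes its pass/fail decision: host/digest.sh@71 above does not parse the notice spam in the dump, it asks the bdevperf process over its RPC socket for the per-bdev NVMe error counters and requires the transient-transport-error count to be non-zero. A minimal standalone sketch of that same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 (the socket path, rpc.py invocation, and jq filter are taken verbatim from the trace above; the shell variable names are illustrative):

    #!/usr/bin/env bash
    # Read the COMMAND TRANSIENT TRANSPORT ERROR counter back from bdevperf,
    # using the same bdev_get_iostat + jq pipeline host/digest.sh runs above.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    count=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # Each injected data digest error surfaces as one transient transport
    # error completion; the (( 67 > 0 )) check above is this counter being
    # compared against zero.
    (( count > 0 )) && echo "observed $count transient transport errors"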
00:26:49.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:49.189 07:44:05 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.189 07:44:05 -- nvmf/common.sh@7 -- # uname -s 00:26:49.189 07:44:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.189 07:44:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.189 07:44:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.189 07:44:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.189 07:44:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.189 07:44:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.189 07:44:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.189 07:44:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.189 07:44:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.189 07:44:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.189 07:44:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.189 07:44:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.189 07:44:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.189 07:44:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.189 07:44:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.189 07:44:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.189 07:44:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.189 07:44:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.189 07:44:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.189 07:44:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.189 07:44:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.189 07:44:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.189 07:44:05 -- paths/export.sh@5 -- # export PATH 00:26:49.189 07:44:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.189 07:44:05 -- nvmf/common.sh@46 -- # : 0 00:26:49.189 07:44:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:49.189 07:44:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:49.189 07:44:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:49.189 07:44:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.189 07:44:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.189 07:44:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:49.189 07:44:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:49.189 07:44:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:49.189 07:44:05 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.189 07:44:05 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.189 07:44:05 -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:49.189 07:44:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:49.189 07:44:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.189 07:44:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:49.448 07:44:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:49.448 07:44:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:49.448 07:44:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.448 07:44:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.448 07:44:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.448 07:44:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:49.448 07:44:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:49.448 07:44:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:49.448 07:44:05 -- common/autotest_common.sh@10 -- # set +x 00:26:51.349 07:44:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:51.349 07:44:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:51.349 07:44:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:51.349 07:44:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:51.349 07:44:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:51.349 07:44:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:51.349 07:44:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:51.349 07:44:07 -- nvmf/common.sh@294 -- # net_devs=() 00:26:51.349 07:44:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:51.349 07:44:07 -- nvmf/common.sh@295 
-- # e810=() 00:26:51.349 07:44:07 -- nvmf/common.sh@295 -- # local -ga e810 00:26:51.349 07:44:07 -- nvmf/common.sh@296 -- # x722=() 00:26:51.349 07:44:07 -- nvmf/common.sh@296 -- # local -ga x722 00:26:51.349 07:44:07 -- nvmf/common.sh@297 -- # mlx=() 00:26:51.349 07:44:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:51.349 07:44:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.349 07:44:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:51.349 07:44:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:51.349 07:44:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:51.349 07:44:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:51.349 07:44:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.349 07:44:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:51.349 07:44:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.349 07:44:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:51.349 07:44:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:51.349 07:44:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.349 07:44:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:51.349 07:44:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.349 07:44:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.349 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:26:51.349 07:44:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.349 07:44:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:51.349 07:44:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.349 07:44:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:51.349 07:44:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.349 07:44:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.349 07:44:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.349 07:44:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:51.349 07:44:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:51.349 07:44:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:51.349 07:44:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:51.349 07:44:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.349 07:44:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.349 07:44:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.349 07:44:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:51.349 07:44:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.349 07:44:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.349 07:44:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:51.349 07:44:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.349 07:44:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.349 07:44:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:51.349 07:44:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:51.349 07:44:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.349 07:44:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.349 07:44:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.349 07:44:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.349 07:44:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:51.349 07:44:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.349 07:44:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.349 07:44:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.349 07:44:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:51.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:26:51.349 00:26:51.349 --- 10.0.0.2 ping statistics --- 00:26:51.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.349 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:26:51.349 07:44:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:26:51.349
00:26:51.349 --- 10.0.0.1 ping statistics ---
00:26:51.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:51.349 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:26:51.349 07:44:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:51.349 07:44:07 -- nvmf/common.sh@410 -- # return 0
00:26:51.349 07:44:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:26:51.349 07:44:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:51.349 07:44:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:26:51.349 07:44:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:26:51.349 07:44:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:51.349 07:44:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:26:51.349 07:44:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:26:51.349 07:44:07 -- host/bdevperf.sh@25 -- # tgt_init
00:26:51.349 07:44:07 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:51.349 07:44:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:26:51.349 07:44:07 -- common/autotest_common.sh@712 -- # xtrace_disable
00:26:51.349 07:44:07 -- common/autotest_common.sh@10 -- # set +x
00:26:51.349 07:44:07 -- nvmf/common.sh@469 -- # nvmfpid=14375
00:26:51.349 07:44:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:51.349 07:44:07 -- nvmf/common.sh@470 -- # waitforlisten 14375
00:26:51.349 07:44:07 -- common/autotest_common.sh@819 -- # '[' -z 14375 ']'
00:26:51.349 07:44:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:51.349 07:44:07 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:51.349 07:44:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:51.349 07:44:07 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:51.349 07:44:07 -- common/autotest_common.sh@10 -- # set +x
00:26:51.349 [2024-07-14 07:44:07.483839] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:26:51.349 [2024-07-14 07:44:07.483928] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:51.349 EAL: No free 2048 kB hugepages reported on node 1
00:26:51.608 [2024-07-14 07:44:07.552416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:51.608 [2024-07-14 07:44:07.666467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:51.608 [2024-07-14 07:44:07.666660] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:51.608 [2024-07-14 07:44:07.666692] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:51.608 [2024-07-14 07:44:07.666705] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
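The target starting here is only reachable through the namespace topology nvmf_tcp_init assembled just above: the first E810 port netdev (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer port (cvl_0_1) stayed in the root namespace as the initiator side at 10.0.0.1, an iptables rule admitted TCP port 4420 on the initiator interface, and the two pings proved reachability in both directions before nvmf_tgt was launched inside the namespace. A condensed replay of those steps as one script (a sketch, not the harness itself), assuming the same cvl_0_0/cvl_0_1 interface names and workspace path seen in the trace, run as root:

    #!/usr/bin/env bash
    set -e
    # Give the target port its own network namespace; the initiator port
    # stays in the root namespace, so NVMe/TCP traffic crosses a real link.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Back-to-back /24: initiator 10.0.0.1, target 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP (default port 4420) arriving on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Reachability checks in both directions, as in the trace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the SPDK target inside the namespace so its TCP listener binds
    # there; the harness backgrounds this and then polls /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &

Every command here appears verbatim in the trace; only the preliminary ip -4 addr flush calls that clear stale addresses were left out of the sketch.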
00:26:51.608 [2024-07-14 07:44:07.666903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.608 [2024-07-14 07:44:07.666962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.608 [2024-07-14 07:44:07.666966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.542 07:44:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:52.542 07:44:08 -- common/autotest_common.sh@852 -- # return 0 00:26:52.542 07:44:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:52.542 07:44:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:52.542 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.542 07:44:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.542 07:44:08 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.542 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.542 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.542 [2024-07-14 07:44:08.428402] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.542 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.542 07:44:08 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:52.542 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.542 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.542 Malloc0 00:26:52.542 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.542 07:44:08 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.542 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.542 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.542 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.542 07:44:08 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.542 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.542 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.542 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.542 07:44:08 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.542 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:52.542 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:26:52.542 [2024-07-14 07:44:08.489749] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.542 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.542 07:44:08 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:52.542 07:44:08 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:52.542 07:44:08 -- nvmf/common.sh@520 -- # config=() 00:26:52.542 07:44:08 -- nvmf/common.sh@520 -- # local subsystem config 00:26:52.542 07:44:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:52.542 07:44:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:52.542 { 00:26:52.542 "params": { 00:26:52.542 "name": "Nvme$subsystem", 00:26:52.542 "trtype": "$TEST_TRANSPORT", 00:26:52.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.542 "adrfam": "ipv4", 00:26:52.542 "trsvcid": "$NVMF_PORT", 00:26:52.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.542 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.542 "hdgst": ${hdgst:-false}, 00:26:52.542 "ddgst": ${ddgst:-false} 00:26:52.542 }, 00:26:52.542 "method": "bdev_nvme_attach_controller" 00:26:52.542 } 00:26:52.542 EOF 00:26:52.542 )") 00:26:52.542 07:44:08 -- nvmf/common.sh@542 -- # cat 00:26:52.542 07:44:08 -- nvmf/common.sh@544 -- # jq . 00:26:52.542 07:44:08 -- nvmf/common.sh@545 -- # IFS=, 00:26:52.542 07:44:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:52.542 "params": { 00:26:52.542 "name": "Nvme1", 00:26:52.542 "trtype": "tcp", 00:26:52.542 "traddr": "10.0.0.2", 00:26:52.542 "adrfam": "ipv4", 00:26:52.542 "trsvcid": "4420", 00:26:52.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.542 "hdgst": false, 00:26:52.542 "ddgst": false 00:26:52.542 }, 00:26:52.542 "method": "bdev_nvme_attach_controller" 00:26:52.542 }' 00:26:52.542 [2024-07-14 07:44:08.530012] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:52.542 [2024-07-14 07:44:08.530094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14540 ] 00:26:52.542 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.542 [2024-07-14 07:44:08.590588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.542 [2024-07-14 07:44:08.697446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.800 Running I/O for 1 seconds... 00:26:53.732 00:26:53.732 Latency(us) 00:26:53.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.732 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:53.732 Verification LBA range: start 0x0 length 0x4000 00:26:53.732 Nvme1n1 : 1.01 12980.93 50.71 0.00 0.00 9815.29 1262.17 16699.54 00:26:53.732 =================================================================================================================== 00:26:53.732 Total : 12980.93 50.71 0.00 0.00 9815.29 1262.17 16699.54 00:26:53.990 07:44:10 -- host/bdevperf.sh@30 -- # bdevperfpid=14685 00:26:53.990 07:44:10 -- host/bdevperf.sh@32 -- # sleep 3 00:26:53.990 07:44:10 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:53.990 07:44:10 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:53.990 07:44:10 -- nvmf/common.sh@520 -- # config=() 00:26:53.990 07:44:10 -- nvmf/common.sh@520 -- # local subsystem config 00:26:53.990 07:44:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:53.990 07:44:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:53.990 { 00:26:53.990 "params": { 00:26:53.990 "name": "Nvme$subsystem", 00:26:53.990 "trtype": "$TEST_TRANSPORT", 00:26:53.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.991 "adrfam": "ipv4", 00:26:53.991 "trsvcid": "$NVMF_PORT", 00:26:53.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.991 "hdgst": ${hdgst:-false}, 00:26:53.991 "ddgst": ${ddgst:-false} 00:26:53.991 }, 00:26:53.991 "method": "bdev_nvme_attach_controller" 00:26:53.991 } 00:26:53.991 EOF 00:26:53.991 )") 00:26:53.991 07:44:10 -- nvmf/common.sh@542 -- # cat 00:26:53.991 07:44:10 -- nvmf/common.sh@544 -- # jq . 
00:26:53.991 07:44:10 -- nvmf/common.sh@545 -- # IFS=, 00:26:53.991 07:44:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:53.991 "params": { 00:26:53.991 "name": "Nvme1", 00:26:53.991 "trtype": "tcp", 00:26:53.991 "traddr": "10.0.0.2", 00:26:53.991 "adrfam": "ipv4", 00:26:53.991 "trsvcid": "4420", 00:26:53.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:53.991 "hdgst": false, 00:26:53.991 "ddgst": false 00:26:53.991 }, 00:26:53.991 "method": "bdev_nvme_attach_controller" 00:26:53.991 }' 00:26:54.248 [2024-07-14 07:44:10.183479] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:54.249 [2024-07-14 07:44:10.183563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14685 ] 00:26:54.249 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.249 [2024-07-14 07:44:10.246479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.249 [2024-07-14 07:44:10.354214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.506 Running I/O for 15 seconds... 00:26:57.035 07:44:13 -- host/bdevperf.sh@33 -- # kill -9 14375 00:26:57.035 07:44:13 -- host/bdevperf.sh@35 -- # sleep 3 00:26:57.035 [2024-07-14 07:44:13.159045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.159973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.159987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6912 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.035 [2024-07-14 07:44:13.160363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.035 [2024-07-14 07:44:13.160381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 
07:44:13.160429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.036 [2024-07-14 07:44:13.160527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.036 [2024-07-14 07:44:13.160560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.036 [2024-07-14 07:44:13.160593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.036 [2024-07-14 07:44:13.160760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.036 [2024-07-14 07:44:13.160793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.160960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.160976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.036 [2024-07-14 07:44:13.160991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.161006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.161020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.161036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.161050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.161065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.161079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.036 [2024-07-14 07:44:13.161094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.036 [2024-07-14 07:44:13.161117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.036 [2024-07-14 07:44:13.161468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.036 [2024-07-14 07:44:13.161501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.036 [2024-07-14 07:44:13.161533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.036 [2024-07-14 07:44:13.161636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.036 [2024-07-14 07:44:13.161735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.036 [2024-07-14 07:44:13.161834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.036 [2024-07-14 07:44:13.161851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.161874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.161893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.161914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.161946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.161961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.161976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.161990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.162085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.162195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.162260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.162293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.162603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.162704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-14 07:44:13.162882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.162980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.162996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.163011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.163047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.163078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.163108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.163138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.163182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.163210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-14 07:44:13.163259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.037 [2024-07-14 07:44:13.163292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.163325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.037 [2024-07-14 07:44:13.163358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.037 [2024-07-14 07:44:13.163379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.038 [2024-07-14 07:44:13.163395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.038 [2024-07-14 07:44:13.163413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.038 [2024-07-14 07:44:13.163429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.038 [2024-07-14 07:44:13.163446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.038 [2024-07-14 07:44:13.163463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.038 [2024-07-14 07:44:13.163481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.038 [2024-07-14 07:44:13.163497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.038 [2024-07-14 07:44:13.163514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.038 [2024-07-14 07:44:13.163529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.038 [2024-07-14 07:44:13.163547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:57.038 [2024-07-14 07:44:13.163564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.038 [2024-07-14 07:44:13.163580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a013a0 is same with the state(5) to be set
00:26:57.038 [2024-07-14 07:44:13.163603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:57.038 [2024-07-14 07:44:13.163616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:57.038 [2024-07-14 07:44:13.163630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6952 len:8 PRP1 0x0 PRP2 0x0
00:26:57.038 [2024-07-14 07:44:13.163644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:57.038 [2024-07-14 07:44:13.163719] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a013a0 was disconnected and freed. reset controller.
00:26:57.038 [2024-07-14 07:44:13.166195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.038 [2024-07-14 07:44:13.166273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.038 [2024-07-14 07:44:13.167024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.038 [2024-07-14 07:44:13.167276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.038 [2024-07-14 07:44:13.167305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.038 [2024-07-14 07:44:13.167324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.038 [2024-07-14 07:44:13.167547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.038 [2024-07-14 07:44:13.167738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.038 [2024-07-14 07:44:13.167763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.038 [2024-07-14 07:44:13.167789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.038 [2024-07-14 07:44:13.170501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.038 [2024-07-14 07:44:13.179161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.038 [2024-07-14 07:44:13.179474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.038 [2024-07-14 07:44:13.179791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.038 [2024-07-14 07:44:13.179818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.038 [2024-07-14 07:44:13.179833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.038 [2024-07-14 07:44:13.179970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.038 [2024-07-14 07:44:13.180166] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.038 [2024-07-14 07:44:13.180192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.038 [2024-07-14 07:44:13.180209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.038 [2024-07-14 07:44:13.182512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.038 [2024-07-14 07:44:13.191595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.038 [2024-07-14 07:44:13.191966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.038 [2024-07-14 07:44:13.192176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.038 [2024-07-14 07:44:13.192205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.038 [2024-07-14 07:44:13.192223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.038 [2024-07-14 07:44:13.192352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.038 [2024-07-14 07:44:13.192522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.038 [2024-07-14 07:44:13.192548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.038 [2024-07-14 07:44:13.192564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.038 [2024-07-14 07:44:13.195027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.300 [2024-07-14 07:44:13.204397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.300 [2024-07-14 07:44:13.204810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.205025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.205055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.300 [2024-07-14 07:44:13.205072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.300 [2024-07-14 07:44:13.205265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.300 [2024-07-14 07:44:13.205483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.300 [2024-07-14 07:44:13.205509] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.300 [2024-07-14 07:44:13.205526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.300 [2024-07-14 07:44:13.207859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.300 [2024-07-14 07:44:13.217138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.300 [2024-07-14 07:44:13.217565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.217942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.217973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.300 [2024-07-14 07:44:13.217992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.300 [2024-07-14 07:44:13.218194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.300 [2024-07-14 07:44:13.218346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.300 [2024-07-14 07:44:13.218370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.300 [2024-07-14 07:44:13.218387] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.300 [2024-07-14 07:44:13.220689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.300 [2024-07-14 07:44:13.229730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.300 [2024-07-14 07:44:13.230191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.230432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.230480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.300 [2024-07-14 07:44:13.230499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.300 [2024-07-14 07:44:13.230665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.300 [2024-07-14 07:44:13.230853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.300 [2024-07-14 07:44:13.230891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.300 [2024-07-14 07:44:13.230910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.300 [2024-07-14 07:44:13.233232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.300 [2024-07-14 07:44:13.242160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.300 [2024-07-14 07:44:13.242560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.242941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.242973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.300 [2024-07-14 07:44:13.242991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.300 [2024-07-14 07:44:13.243158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.300 [2024-07-14 07:44:13.243327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.300 [2024-07-14 07:44:13.243353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.300 [2024-07-14 07:44:13.243369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.300 [2024-07-14 07:44:13.245619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.300 [2024-07-14 07:44:13.254629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.300 [2024-07-14 07:44:13.255033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.255233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.255259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.300 [2024-07-14 07:44:13.255275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.300 [2024-07-14 07:44:13.255490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.300 [2024-07-14 07:44:13.255644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.300 [2024-07-14 07:44:13.255669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.300 [2024-07-14 07:44:13.255686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.300 [2024-07-14 07:44:13.257966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.300 [2024-07-14 07:44:13.267229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.300 [2024-07-14 07:44:13.267654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.267914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.267945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.300 [2024-07-14 07:44:13.267964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.300 [2024-07-14 07:44:13.268130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.300 [2024-07-14 07:44:13.268355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.300 [2024-07-14 07:44:13.268380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.300 [2024-07-14 07:44:13.268396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.300 [2024-07-14 07:44:13.270630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.300 [2024-07-14 07:44:13.279958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.300 [2024-07-14 07:44:13.280576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.280926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.280957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.300 [2024-07-14 07:44:13.280975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.300 [2024-07-14 07:44:13.281106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.300 [2024-07-14 07:44:13.281258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.300 [2024-07-14 07:44:13.281281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.300 [2024-07-14 07:44:13.281297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.300 [2024-07-14 07:44:13.283507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.300 [2024-07-14 07:44:13.292589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.300 [2024-07-14 07:44:13.292966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.293206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-14 07:44:13.293237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.293255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.293422] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.293591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.293616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.293633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.296011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.305007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.305403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.305664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.305691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.305707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.305889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.306075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.306100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.306117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.308512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.317722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.318086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.318361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.318414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.318433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.318583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.318770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.318796] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.318812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.321076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.330476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.330860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.331067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.331100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.331117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.331270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.331456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.331482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.331499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.333861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.343116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.343533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.343761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.343812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.343830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.343975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.344146] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.344169] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.344185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.346689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.355759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.356261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.356548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.356596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.356615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.356765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.356948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.356975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.356992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.359277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.368540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.368909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.369161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.369188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.369209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.369414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.369621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.369647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.369663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.371993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.381279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.381633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.381836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.381888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.381908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.382057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.382244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.382270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.382286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.384805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.393824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.394210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.394627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.394679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.394697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.394826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.394991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.395016] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.395032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.397408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.406413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.406806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.407037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.407068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.407087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.301 [2024-07-14 07:44:13.407313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.301 [2024-07-14 07:44:13.407467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.301 [2024-07-14 07:44:13.407492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.301 [2024-07-14 07:44:13.407508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.301 [2024-07-14 07:44:13.409813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.301 [2024-07-14 07:44:13.418966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.301 [2024-07-14 07:44:13.419426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.419741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.301 [2024-07-14 07:44:13.419789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.301 [2024-07-14 07:44:13.419808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.302 [2024-07-14 07:44:13.419968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.302 [2024-07-14 07:44:13.420157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.302 [2024-07-14 07:44:13.420181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.302 [2024-07-14 07:44:13.420197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.302 [2024-07-14 07:44:13.422555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.302 [2024-07-14 07:44:13.431517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.302 [2024-07-14 07:44:13.431944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.302 [2024-07-14 07:44:13.432189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.302 [2024-07-14 07:44:13.432218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.302 [2024-07-14 07:44:13.432236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.302 [2024-07-14 07:44:13.432421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.302 [2024-07-14 07:44:13.432591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.302 [2024-07-14 07:44:13.432616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.302 [2024-07-14 07:44:13.432633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.302 [2024-07-14 07:44:13.434931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.302 [2024-07-14 07:44:13.444150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.302 [2024-07-14 07:44:13.444610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.302 [2024-07-14 07:44:13.444935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.302 [2024-07-14 07:44:13.444963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.302 [2024-07-14 07:44:13.444979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.302 [2024-07-14 07:44:13.445158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.302 [2024-07-14 07:44:13.445316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.302 [2024-07-14 07:44:13.445341] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.302 [2024-07-14 07:44:13.445357] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.302 [2024-07-14 07:44:13.447471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.302 [2024-07-14 07:44:13.456810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.302 [2024-07-14 07:44:13.457194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.302 [2024-07-14 07:44:13.457452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.302 [2024-07-14 07:44:13.457506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.302 [2024-07-14 07:44:13.457524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.302 [2024-07-14 07:44:13.457654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.302 [2024-07-14 07:44:13.457841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.302 [2024-07-14 07:44:13.457875] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.302 [2024-07-14 07:44:13.457896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.302 [2024-07-14 07:44:13.460330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.561 [2024-07-14 07:44:13.469605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.561 [2024-07-14 07:44:13.470020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.561 [2024-07-14 07:44:13.470267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.561 [2024-07-14 07:44:13.470297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.561 [2024-07-14 07:44:13.470315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.561 [2024-07-14 07:44:13.470446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.561 [2024-07-14 07:44:13.470598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.561 [2024-07-14 07:44:13.470622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.561 [2024-07-14 07:44:13.470639] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.561 [2024-07-14 07:44:13.473033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.561 [2024-07-14 07:44:13.482274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.561 [2024-07-14 07:44:13.482746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.561 [2024-07-14 07:44:13.483008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.561 [2024-07-14 07:44:13.483039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.561 [2024-07-14 07:44:13.483057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.561 [2024-07-14 07:44:13.483260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.561 [2024-07-14 07:44:13.483466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.561 [2024-07-14 07:44:13.483496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.483513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.485837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.494811] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.495223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.495571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.495625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.495643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.495808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.495974] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.495999] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.496015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.498300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.507339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.507714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.507918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.507945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.507961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.508090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.508298] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.508322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.508339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.510881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.519927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.520375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.520670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.520695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.520726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.520835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.521027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.521052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.521073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.523432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.532661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.533083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.533294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.533324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.533342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.533571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.533698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.533722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.533738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.536233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.545105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.545562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.545780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.545809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.545827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.546039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.546202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.546231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.546247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.548528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.557779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.558191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.558579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.558638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.558656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.558822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.558964] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.558989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.559005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.561369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.570390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.570854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.571156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.571186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.571204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.571388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.571558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.571582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.571598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.573842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.582855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.583245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.583487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.583534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.583553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.583682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.583878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.583904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.583920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.586316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.595502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.595894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.596136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.596176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.596195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.596342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.596494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.596518] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.562 [2024-07-14 07:44:13.596535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.562 [2024-07-14 07:44:13.598800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.562 [2024-07-14 07:44:13.607911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.562 [2024-07-14 07:44:13.608488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.608896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.562 [2024-07-14 07:44:13.608954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.562 [2024-07-14 07:44:13.608972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.562 [2024-07-14 07:44:13.609119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.562 [2024-07-14 07:44:13.609281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.562 [2024-07-14 07:44:13.609305] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.609322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.611624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.620384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.620883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.621124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.621164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.621182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.621329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.621463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.621488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.621504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.623904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.633090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.633638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.633909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.633940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.633958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.634161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.634331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.634355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.634371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.636488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.645778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.646200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.646432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.646481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.646499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.646683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.646881] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.646905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.646922] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.649240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.658298] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.658632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.658830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.658878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.658899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.659065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.659271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.659296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.659312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.661516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.670871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.671282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.671584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.671610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.671640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.671795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.671993] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.672019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.672035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.674375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.683394] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.683848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.684167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.684202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.684221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.684405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.684558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.684583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.684599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.686778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.695904] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.696333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.696569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.696618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.696637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.696802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.696983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.697008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.697024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.699344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.708570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.708999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.709287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.709316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.709334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.709518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.709670] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.709694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.709711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.712185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.563 [2024-07-14 07:44:13.721165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.563 [2024-07-14 07:44:13.721512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.721774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.563 [2024-07-14 07:44:13.721821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.563 [2024-07-14 07:44:13.721844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.563 [2024-07-14 07:44:13.722021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.563 [2024-07-14 07:44:13.722210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.563 [2024-07-14 07:44:13.722234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.563 [2024-07-14 07:44:13.722251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.563 [2024-07-14 07:44:13.724479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.733741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.734172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.734489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.734515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.734545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.734737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.734957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.734992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.735012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.737364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.746413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.746792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.747009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.747040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.747058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.747206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.747430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.747455] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.747472] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.749671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.759167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.759542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.759921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.759951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.759969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.760123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.760293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.760317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.760334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.762454] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.771894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.772411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.772844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.772908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.772927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.773093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.773298] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.773323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.773339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.775645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.784403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.784828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.785057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.785083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.785100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.785296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.785466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.785491] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.785507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.788031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.796992] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.797432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.797700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.797728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.797747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.797889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.798065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.798090] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.798107] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.800353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.809472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.809848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.810052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.810081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.810099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.810247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.810452] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.810477] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.810493] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.812616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.821926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.822346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.822635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.822687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.822705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.822882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.823053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.823077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.823093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.825520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.834358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.823 [2024-07-14 07:44:13.834769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.835036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.823 [2024-07-14 07:44:13.835066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.823 [2024-07-14 07:44:13.835085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.823 [2024-07-14 07:44:13.835305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.823 [2024-07-14 07:44:13.835439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.823 [2024-07-14 07:44:13.835468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.823 [2024-07-14 07:44:13.835485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.823 [2024-07-14 07:44:13.837805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.823 [2024-07-14 07:44:13.846744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.847107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.847496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.847564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.847581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.847782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.847982] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.848008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.848024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.850308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.859372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.859934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.860198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.860265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.860283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.860467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.860637] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.860661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.860677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.862968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.871711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.872148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.872500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.872561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.872579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.872726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.872944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.872970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.872992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.875276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.884486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.884840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.885053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.885083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.885101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.885249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.885420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.885444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.885460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.887756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.897029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.897438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.897742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.897782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.897798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.897934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.898112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.898136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.898153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.900327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.909616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.910023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.910270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.910299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.910317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.910502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.910636] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.910660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.910677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.912822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.922075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.922423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.922710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.922765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.922791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.922968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.923139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.923163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.923179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.925397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.934699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.935036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.935255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.935303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.935321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.935505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.935675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.935699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.935715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.938117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.947276] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.947681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.947930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.947961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.947979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.948127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.948279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.948304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.948320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.950692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.959862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.824 [2024-07-14 07:44:13.960243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.960523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.824 [2024-07-14 07:44:13.960552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.824 [2024-07-14 07:44:13.960570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.824 [2024-07-14 07:44:13.960700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.824 [2024-07-14 07:44:13.960917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.824 [2024-07-14 07:44:13.960942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.824 [2024-07-14 07:44:13.960959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.824 [2024-07-14 07:44:13.963143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.824 [2024-07-14 07:44:13.972466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.825 [2024-07-14 07:44:13.972833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.825 [2024-07-14 07:44:13.973057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.825 [2024-07-14 07:44:13.973087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.825 [2024-07-14 07:44:13.973105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.825 [2024-07-14 07:44:13.973307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.825 [2024-07-14 07:44:13.973496] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.825 [2024-07-14 07:44:13.973521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.825 [2024-07-14 07:44:13.973537] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.825 [2024-07-14 07:44:13.975839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:57.825 [2024-07-14 07:44:13.985086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:57.825 [2024-07-14 07:44:13.985446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.825 [2024-07-14 07:44:13.985823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.825 [2024-07-14 07:44:13.985881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:57.825 [2024-07-14 07:44:13.985902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:57.825 [2024-07-14 07:44:13.986123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:57.825 [2024-07-14 07:44:13.986312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:57.825 [2024-07-14 07:44:13.986336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:57.825 [2024-07-14 07:44:13.986352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:57.825 [2024-07-14 07:44:13.988878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.084 [2024-07-14 07:44:13.997928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.084 [2024-07-14 07:44:13.998264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:13.998601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:13.998629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.084 [2024-07-14 07:44:13.998644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.084 [2024-07-14 07:44:13.998902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.084 [2024-07-14 07:44:13.999020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.084 [2024-07-14 07:44:13.999044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.084 [2024-07-14 07:44:13.999060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.084 [2024-07-14 07:44:14.001364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.084 [2024-07-14 07:44:14.010506] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.084 [2024-07-14 07:44:14.010985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.011279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.011308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.084 [2024-07-14 07:44:14.011327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.084 [2024-07-14 07:44:14.011457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.084 [2024-07-14 07:44:14.011627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.084 [2024-07-14 07:44:14.011651] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.084 [2024-07-14 07:44:14.011667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.084 [2024-07-14 07:44:14.014019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
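The cycles are evenly spaced: the first six "resetting controller" notices above land at 07:44:13.947276, .959862, .972466, .985086, .997928 and 07:44:14.010506, roughly 12.6 ms apart, so the reconnect poller is retrying the dead target about 80 times per second. A quick check of that spacing (timestamps copied from the records above; this arithmetic is an editorial aid, not test output):

    # Gaps between successive "resetting controller" notices, in seconds
    # past 07:44:13 (values copied from the six records above).
    attempts = [13.947276, 13.959862, 13.972466, 13.985086, 13.997928, 14.010506]
    gaps = [b - a for a, b in zip(attempts, attempts[1:])]
    print(f"mean gap: {sum(gaps) / len(gaps) * 1000:.1f} ms")  # ~12.6 ms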
00:26:58.084 [2024-07-14 07:44:14.023156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.084 [2024-07-14 07:44:14.023559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.023837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.023899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.084 [2024-07-14 07:44:14.023918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.084 [2024-07-14 07:44:14.024088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.084 [2024-07-14 07:44:14.024243] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.084 [2024-07-14 07:44:14.024281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.084 [2024-07-14 07:44:14.024297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.084 [2024-07-14 07:44:14.026472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.084 [2024-07-14 07:44:14.035670] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.084 [2024-07-14 07:44:14.036061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.036288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.036335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.084 [2024-07-14 07:44:14.036355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.084 [2024-07-14 07:44:14.036521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.084 [2024-07-14 07:44:14.036655] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.084 [2024-07-14 07:44:14.036679] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.084 [2024-07-14 07:44:14.036696] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.084 [2024-07-14 07:44:14.039110] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.084 [2024-07-14 07:44:14.048316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.084 [2024-07-14 07:44:14.048762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.048995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.049023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.084 [2024-07-14 07:44:14.049039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.084 [2024-07-14 07:44:14.049249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.084 [2024-07-14 07:44:14.049471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.084 [2024-07-14 07:44:14.049493] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.084 [2024-07-14 07:44:14.049506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.084 [2024-07-14 07:44:14.051920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.084 [2024-07-14 07:44:14.060913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.084 [2024-07-14 07:44:14.061268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.061454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.084 [2024-07-14 07:44:14.061496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.084 [2024-07-14 07:44:14.061515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.084 [2024-07-14 07:44:14.061662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.084 [2024-07-14 07:44:14.061843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.084 [2024-07-14 07:44:14.061863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.084 [2024-07-14 07:44:14.061905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.084 [2024-07-14 07:44:14.064264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.084 [2024-07-14 07:44:14.073559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.073943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.074128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.074155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.074191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.074337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.074449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.074469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.074482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.076863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.085 [2024-07-14 07:44:14.086087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.086450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.086643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.086669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.086686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.086803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.086963] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.086986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.087001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.089221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.085 [2024-07-14 07:44:14.098492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.098822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.099015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.099041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.099058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.099215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.099386] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.099411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.099427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.101597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.085 [2024-07-14 07:44:14.111198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.111638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.111839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.111872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.111891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.112061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.112274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.112299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.112315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.114824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.085 [2024-07-14 07:44:14.123837] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.124189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.124464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.124489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.124523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.124710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.124892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.124915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.124930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.127405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.085 [2024-07-14 07:44:14.136404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.136827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.137043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.137070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.137086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.137229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.137399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.137424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.137440] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.139862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.085 [2024-07-14 07:44:14.149010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.149479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.149700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.149729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.149747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.149936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.150081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.150103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.150118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.152464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.085 [2024-07-14 07:44:14.161667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.162014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.162226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.162274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.162292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.162457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.162663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.162688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.162704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.165007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.085 [2024-07-14 07:44:14.174216] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.174604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.174875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.174905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.174923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.175090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.175260] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.175284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.175300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.177456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.085 [2024-07-14 07:44:14.186814] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.085 [2024-07-14 07:44:14.187191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.187465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.085 [2024-07-14 07:44:14.187511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.085 [2024-07-14 07:44:14.187530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.085 [2024-07-14 07:44:14.187678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.085 [2024-07-14 07:44:14.187812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.085 [2024-07-14 07:44:14.187841] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.085 [2024-07-14 07:44:14.187858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.085 [2024-07-14 07:44:14.190351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.086 [2024-07-14 07:44:14.199617] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.086 [2024-07-14 07:44:14.199991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.200239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.200268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.086 [2024-07-14 07:44:14.200286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.086 [2024-07-14 07:44:14.200453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.086 [2024-07-14 07:44:14.200623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.086 [2024-07-14 07:44:14.200648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.086 [2024-07-14 07:44:14.200664] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.086 [2024-07-14 07:44:14.202807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.086 [2024-07-14 07:44:14.212139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.086 [2024-07-14 07:44:14.212579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.212813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.212842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.086 [2024-07-14 07:44:14.212858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.086 [2024-07-14 07:44:14.213091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.086 [2024-07-14 07:44:14.213226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.086 [2024-07-14 07:44:14.213252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.086 [2024-07-14 07:44:14.213269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.086 [2024-07-14 07:44:14.215535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.086 [2024-07-14 07:44:14.224644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.086 [2024-07-14 07:44:14.225016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.225225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.225256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.086 [2024-07-14 07:44:14.225274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.086 [2024-07-14 07:44:14.225441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.086 [2024-07-14 07:44:14.225629] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.086 [2024-07-14 07:44:14.225655] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.086 [2024-07-14 07:44:14.225676] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.086 [2024-07-14 07:44:14.227974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.086 [2024-07-14 07:44:14.237073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.086 [2024-07-14 07:44:14.237459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.237693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.237741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.086 [2024-07-14 07:44:14.237760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.086 [2024-07-14 07:44:14.237923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.086 [2024-07-14 07:44:14.238075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.086 [2024-07-14 07:44:14.238099] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.086 [2024-07-14 07:44:14.238115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.086 [2024-07-14 07:44:14.240398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.086 [2024-07-14 07:44:14.249846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.086 [2024-07-14 07:44:14.250268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.250503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.086 [2024-07-14 07:44:14.250534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.086 [2024-07-14 07:44:14.250553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.086 [2024-07-14 07:44:14.250726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.086 [2024-07-14 07:44:14.250979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.086 [2024-07-14 07:44:14.251017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.086 [2024-07-14 07:44:14.251049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.344 [2024-07-14 07:44:14.253611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.344 [2024-07-14 07:44:14.262365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.344 [2024-07-14 07:44:14.262759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.263011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.263044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.344 [2024-07-14 07:44:14.263063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.344 [2024-07-14 07:44:14.263257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.344 [2024-07-14 07:44:14.263448] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.344 [2024-07-14 07:44:14.263474] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.344 [2024-07-14 07:44:14.263490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.344 [2024-07-14 07:44:14.265970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.344 [2024-07-14 07:44:14.274951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.344 [2024-07-14 07:44:14.275381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.275628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.275677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.344 [2024-07-14 07:44:14.275695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.344 [2024-07-14 07:44:14.275844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.344 [2024-07-14 07:44:14.276027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.344 [2024-07-14 07:44:14.276054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.344 [2024-07-14 07:44:14.276070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.344 [2024-07-14 07:44:14.278283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.344 [2024-07-14 07:44:14.287645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.344 [2024-07-14 07:44:14.287995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.288328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.288376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.344 [2024-07-14 07:44:14.288394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.344 [2024-07-14 07:44:14.288507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.344 [2024-07-14 07:44:14.288694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.344 [2024-07-14 07:44:14.288719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.344 [2024-07-14 07:44:14.288736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.344 [2024-07-14 07:44:14.291032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.344 [2024-07-14 07:44:14.300297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.344 [2024-07-14 07:44:14.300649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.300876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.300903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.344 [2024-07-14 07:44:14.300919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.344 [2024-07-14 07:44:14.301086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.344 [2024-07-14 07:44:14.301256] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.344 [2024-07-14 07:44:14.301282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.344 [2024-07-14 07:44:14.301298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.344 [2024-07-14 07:44:14.303602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.344 [2024-07-14 07:44:14.312858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.344 [2024-07-14 07:44:14.313200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.313540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.313587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.344 [2024-07-14 07:44:14.313605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.344 [2024-07-14 07:44:14.313754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.344 [2024-07-14 07:44:14.313901] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.344 [2024-07-14 07:44:14.313926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.344 [2024-07-14 07:44:14.313942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.344 [2024-07-14 07:44:14.316391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.344 [2024-07-14 07:44:14.325472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.344 [2024-07-14 07:44:14.325860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.326074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.344 [2024-07-14 07:44:14.326103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.344 [2024-07-14 07:44:14.326121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.344 [2024-07-14 07:44:14.326306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.344 [2024-07-14 07:44:14.326459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.344 [2024-07-14 07:44:14.326484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.326501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.328772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.345 [2024-07-14 07:44:14.337983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.338388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.338612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.338638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.338669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.338848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.339030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.339056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.339073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.341322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.345 [2024-07-14 07:44:14.350526] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.350963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.351198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.351228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.351247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.351413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.351583] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.351609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.351625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.353860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.345 [2024-07-14 07:44:14.363129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.363547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.363777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.363825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.363844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.363968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.364120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.364144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.364160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.366575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.345 [2024-07-14 07:44:14.375913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.376297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.376665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.376715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.376733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.376948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.377136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.377160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.377175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.379657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.345 [2024-07-14 07:44:14.388641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.388985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.389197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.389233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.389252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.389418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.389587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.389611] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.389627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.391774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.345 [2024-07-14 07:44:14.400962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.401373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.401579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.401618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.401637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.401840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.401986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.402012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.402029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.404295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.345 [2024-07-14 07:44:14.413504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.413927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.414110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.414138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.414156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.414359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.414546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.414571] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.414588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.416972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.345 [2024-07-14 07:44:14.425932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.426358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.426679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.426708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.426733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.426892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.427062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.427085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.427102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.429402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.345 [2024-07-14 07:44:14.438464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.438885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.439121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.439151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.439170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.439300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.439434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.345 [2024-07-14 07:44:14.439457] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.345 [2024-07-14 07:44:14.439473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.345 [2024-07-14 07:44:14.441970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.345 [2024-07-14 07:44:14.450904] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.345 [2024-07-14 07:44:14.451328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.451577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.345 [2024-07-14 07:44:14.451617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.345 [2024-07-14 07:44:14.451633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.345 [2024-07-14 07:44:14.451812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.345 [2024-07-14 07:44:14.452020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.346 [2024-07-14 07:44:14.452047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.346 [2024-07-14 07:44:14.452065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.346 [2024-07-14 07:44:14.454402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.346 [2024-07-14 07:44:14.463588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.346 [2024-07-14 07:44:14.463987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.346 [2024-07-14 07:44:14.464194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.346 [2024-07-14 07:44:14.464222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.346 [2024-07-14 07:44:14.464240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.346 [2024-07-14 07:44:14.464412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.346 [2024-07-14 07:44:14.464599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.346 [2024-07-14 07:44:14.464625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.346 [2024-07-14 07:44:14.464642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.346 [2024-07-14 07:44:14.467138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.346 [2024-07-14 07:44:14.476164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.346 [2024-07-14 07:44:14.476573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.346 [2024-07-14 07:44:14.476774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.346 [2024-07-14 07:44:14.476802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.346 [2024-07-14 07:44:14.476820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.346 [2024-07-14 07:44:14.477000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.346 [2024-07-14 07:44:14.477153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.346 [2024-07-14 07:44:14.477178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.346 [2024-07-14 07:44:14.477195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.346 [2024-07-14 07:44:14.479517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.346 [2024-07-14 07:44:14.488654] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.346 [2024-07-14 07:44:14.489060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.346 [2024-07-14 07:44:14.489349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.346 [2024-07-14 07:44:14.489414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:58.346 [2024-07-14 07:44:14.489433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:58.346 [2024-07-14 07:44:14.489563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:58.346 [2024-07-14 07:44:14.489714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.346 [2024-07-14 07:44:14.489737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.346 [2024-07-14 07:44:14.489754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.346 [2024-07-14 07:44:14.492200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.346 [2024-07-14 07:44:14.501261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.346 [2024-07-14 07:44:14.501616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.346 [2024-07-14 07:44:14.501851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.346 [2024-07-14 07:44:14.501893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.346 [2024-07-14 07:44:14.501913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.346 [2024-07-14 07:44:14.502062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.346 [2024-07-14 07:44:14.502238] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.346 [2024-07-14 07:44:14.502264] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.346 [2024-07-14 07:44:14.502280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.346 [2024-07-14 07:44:14.504657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.604 [2024-07-14 07:44:14.514174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.604 [2024-07-14 07:44:14.514777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.515050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.515083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.604 [2024-07-14 07:44:14.515102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.604 [2024-07-14 07:44:14.515271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.604 [2024-07-14 07:44:14.515458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.604 [2024-07-14 07:44:14.515484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.604 [2024-07-14 07:44:14.515501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.604 [2024-07-14 07:44:14.517873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.604 [2024-07-14 07:44:14.526720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.604 [2024-07-14 07:44:14.527122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.527343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.527373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.604 [2024-07-14 07:44:14.527392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.604 [2024-07-14 07:44:14.527577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.604 [2024-07-14 07:44:14.527747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.604 [2024-07-14 07:44:14.527772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.604 [2024-07-14 07:44:14.527789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.604 [2024-07-14 07:44:14.530141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.604 [2024-07-14 07:44:14.539238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.604 [2024-07-14 07:44:14.539617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.539825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.539853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.604 [2024-07-14 07:44:14.539884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.604 [2024-07-14 07:44:14.540071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.604 [2024-07-14 07:44:14.540240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.604 [2024-07-14 07:44:14.540272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.604 [2024-07-14 07:44:14.540289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.604 [2024-07-14 07:44:14.542359] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.604 [2024-07-14 07:44:14.551636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.604 [2024-07-14 07:44:14.552020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.552219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.552248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.604 [2024-07-14 07:44:14.552267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.604 [2024-07-14 07:44:14.552432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.604 [2024-07-14 07:44:14.552592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.604 [2024-07-14 07:44:14.552617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.604 [2024-07-14 07:44:14.552634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.604 [2024-07-14 07:44:14.555036] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.604 [2024-07-14 07:44:14.564348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.604 [2024-07-14 07:44:14.564921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.565142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.565174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.604 [2024-07-14 07:44:14.565192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.604 [2024-07-14 07:44:14.565377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.604 [2024-07-14 07:44:14.565565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.604 [2024-07-14 07:44:14.565590] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.604 [2024-07-14 07:44:14.565606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.604 [2024-07-14 07:44:14.567932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.604 [2024-07-14 07:44:14.576815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.604 [2024-07-14 07:44:14.577211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.577423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.577448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.604 [2024-07-14 07:44:14.577463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.604 [2024-07-14 07:44:14.577589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.604 [2024-07-14 07:44:14.577780] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.604 [2024-07-14 07:44:14.577805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.604 [2024-07-14 07:44:14.577828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.604 [2024-07-14 07:44:14.580070] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.604 [2024-07-14 07:44:14.589421] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.604 [2024-07-14 07:44:14.589835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.590033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.604 [2024-07-14 07:44:14.590063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.604 [2024-07-14 07:44:14.590081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.590211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.590379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.590405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.590421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.592914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.602156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.602567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.602808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.602857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.602889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.603094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.603245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.603271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.603288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.605611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.614850] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.615289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.615529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.615573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.615592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.615758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.615904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.615931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.615947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.618399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.627676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.628009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.628221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.628252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.628270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.628401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.628570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.628595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.628612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.630827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.640210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.640542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.640753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.640781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.640799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.640943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.641131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.641157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.641173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.643564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.652759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.653126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.653384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.653425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.653441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.653622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.653821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.653847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.653864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.656217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.665411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.665852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.666114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.666145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.666163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.666347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.666517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.666543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.666559] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.668927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.678183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.678530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.678827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.678898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.678917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.679083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.679306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.679330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.679346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.681758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.690762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.691161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.691468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.691498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.691516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.691665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.691834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.691859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.691885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.694127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.703321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.703759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.703961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.703988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.704004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.704220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.605 [2024-07-14 07:44:14.704354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.605 [2024-07-14 07:44:14.704380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.605 [2024-07-14 07:44:14.704396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.605 [2024-07-14 07:44:14.706775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.605 [2024-07-14 07:44:14.715664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.605 [2024-07-14 07:44:14.716032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.716297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.605 [2024-07-14 07:44:14.716344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.605 [2024-07-14 07:44:14.716363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.605 [2024-07-14 07:44:14.716546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.606 [2024-07-14 07:44:14.716720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.606 [2024-07-14 07:44:14.716744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.606 [2024-07-14 07:44:14.716760] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.606 [2024-07-14 07:44:14.719076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.606 [2024-07-14 07:44:14.728268] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.606 [2024-07-14 07:44:14.728647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.606 [2024-07-14 07:44:14.728883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.606 [2024-07-14 07:44:14.728913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.606 [2024-07-14 07:44:14.728932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.606 [2024-07-14 07:44:14.729043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.606 [2024-07-14 07:44:14.729214] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.606 [2024-07-14 07:44:14.729238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.606 [2024-07-14 07:44:14.729254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.606 [2024-07-14 07:44:14.731448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.606 [2024-07-14 07:44:14.740726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.606 [2024-07-14 07:44:14.741131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.606 [2024-07-14 07:44:14.741417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.606 [2024-07-14 07:44:14.741449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.606 [2024-07-14 07:44:14.741482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.606 [2024-07-14 07:44:14.741676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.606 [2024-07-14 07:44:14.741873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.606 [2024-07-14 07:44:14.741898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.606 [2024-07-14 07:44:14.741915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.606 [2024-07-14 07:44:14.744176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.606 [2024-07-14 07:44:14.753355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.606 [2024-07-14 07:44:14.753819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.606 [2024-07-14 07:44:14.754093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.606 [2024-07-14 07:44:14.754123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.606 [2024-07-14 07:44:14.754142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.606 [2024-07-14 07:44:14.754290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.606 [2024-07-14 07:44:14.754424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.606 [2024-07-14 07:44:14.754448] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.606 [2024-07-14 07:44:14.754465] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.606 [2024-07-14 07:44:14.756818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.606 [2024-07-14 07:44:14.765879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.606 [2024-07-14 07:44:14.766220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.606 [2024-07-14 07:44:14.766516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.606 [2024-07-14 07:44:14.766563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.606 [2024-07-14 07:44:14.766581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.606 [2024-07-14 07:44:14.766765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.606 [2024-07-14 07:44:14.766963] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.606 [2024-07-14 07:44:14.766988] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.606 [2024-07-14 07:44:14.767005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.606 [2024-07-14 07:44:14.769141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.867 [2024-07-14 07:44:14.778724] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.867 [2024-07-14 07:44:14.779056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.779287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.779337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.867 [2024-07-14 07:44:14.779362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.867 [2024-07-14 07:44:14.779512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.867 [2024-07-14 07:44:14.779739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.867 [2024-07-14 07:44:14.779767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.867 [2024-07-14 07:44:14.779784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.867 [2024-07-14 07:44:14.782674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.867 [2024-07-14 07:44:14.791549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.867 [2024-07-14 07:44:14.791972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.792194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.792222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.867 [2024-07-14 07:44:14.792253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.867 [2024-07-14 07:44:14.792490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.867 [2024-07-14 07:44:14.792680] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.867 [2024-07-14 07:44:14.792706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.867 [2024-07-14 07:44:14.792722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.867 [2024-07-14 07:44:14.795092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.867 [2024-07-14 07:44:14.803998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.867 [2024-07-14 07:44:14.804398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.804646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.804708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.867 [2024-07-14 07:44:14.804727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.867 [2024-07-14 07:44:14.804922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.867 [2024-07-14 07:44:14.805076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.867 [2024-07-14 07:44:14.805100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.867 [2024-07-14 07:44:14.805128] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.867 [2024-07-14 07:44:14.807589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.867 [2024-07-14 07:44:14.816722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.867 [2024-07-14 07:44:14.817071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.817480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.817540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.867 [2024-07-14 07:44:14.817559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.867 [2024-07-14 07:44:14.817677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.867 [2024-07-14 07:44:14.817811] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.867 [2024-07-14 07:44:14.817836] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.867 [2024-07-14 07:44:14.817852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.867 [2024-07-14 07:44:14.820127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.867 [2024-07-14 07:44:14.829309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.867 [2024-07-14 07:44:14.829738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.829988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.830019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.867 [2024-07-14 07:44:14.830037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.867 [2024-07-14 07:44:14.830185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.867 [2024-07-14 07:44:14.830355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.867 [2024-07-14 07:44:14.830379] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.867 [2024-07-14 07:44:14.830396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.867 [2024-07-14 07:44:14.832624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.867 [2024-07-14 07:44:14.842039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.867 [2024-07-14 07:44:14.842456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.842654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.842699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.867 [2024-07-14 07:44:14.842717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.867 [2024-07-14 07:44:14.842829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.867 [2024-07-14 07:44:14.842975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.867 [2024-07-14 07:44:14.843001] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.867 [2024-07-14 07:44:14.843017] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.867 [2024-07-14 07:44:14.845432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.867 [2024-07-14 07:44:14.854646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.867 [2024-07-14 07:44:14.855091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.855469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.867 [2024-07-14 07:44:14.855529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.867 [2024-07-14 07:44:14.855548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.867 [2024-07-14 07:44:14.855697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.867 [2024-07-14 07:44:14.855904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.855930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.855946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.858162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.867350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.867820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.868059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.868089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.868108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.868291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.868463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.868488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.868504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.870863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.879769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.880131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.880377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.880425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.880444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.880629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.880798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.880824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.880840] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.883199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.892522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.892907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.893096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.893124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.893141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.893308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.893460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.893490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.893508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.895780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.905108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.905672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.905927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.905956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.905975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.906105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.906274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.906299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.906316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.908424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.917586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.917985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.918222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.918272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.918291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.918457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.918680] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.918705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.918721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.920978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.930414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.930837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.931077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.931108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.931127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.931293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.931444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.931469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.931490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.933760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.943020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.943443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.943631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.943658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.943676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.943842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.944042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.944069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.944086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.946335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.955567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.955974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.956223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.956264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.956280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.956487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.956675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.956700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.956716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.868 [2024-07-14 07:44:14.958992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.868 [2024-07-14 07:44:14.968044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.868 [2024-07-14 07:44:14.968448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.968748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.868 [2024-07-14 07:44:14.968800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.868 [2024-07-14 07:44:14.968819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.868 [2024-07-14 07:44:14.969034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.868 [2024-07-14 07:44:14.969186] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.868 [2024-07-14 07:44:14.969212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.868 [2024-07-14 07:44:14.969228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.869 [2024-07-14 07:44:14.971557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.869 [2024-07-14 07:44:14.980501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.869 [2024-07-14 07:44:14.980923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:14.981100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:14.981130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.869 [2024-07-14 07:44:14.981149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.869 [2024-07-14 07:44:14.981315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.869 [2024-07-14 07:44:14.981485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.869 [2024-07-14 07:44:14.981510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.869 [2024-07-14 07:44:14.981527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.869 [2024-07-14 07:44:14.983815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.869 [2024-07-14 07:44:14.993038] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.869 [2024-07-14 07:44:14.993444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:14.993821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:14.993888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.869 [2024-07-14 07:44:14.993911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.869 [2024-07-14 07:44:14.994097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.869 [2024-07-14 07:44:14.994248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.869 [2024-07-14 07:44:14.994274] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.869 [2024-07-14 07:44:14.994290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.869 [2024-07-14 07:44:14.996685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.869 [2024-07-14 07:44:15.005680] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.869 [2024-07-14 07:44:15.006034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:15.006278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:15.006325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.869 [2024-07-14 07:44:15.006344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.869 [2024-07-14 07:44:15.006475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.869 [2024-07-14 07:44:15.006626] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.869 [2024-07-14 07:44:15.006650] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.869 [2024-07-14 07:44:15.006666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.869 [2024-07-14 07:44:15.008830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.869 [2024-07-14 07:44:15.018080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.869 [2024-07-14 07:44:15.018514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:15.018711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:15.018739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.869 [2024-07-14 07:44:15.018757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.869 [2024-07-14 07:44:15.018954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.869 [2024-07-14 07:44:15.019142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.869 [2024-07-14 07:44:15.019168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.869 [2024-07-14 07:44:15.019185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.869 [2024-07-14 07:44:15.021506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:58.869 [2024-07-14 07:44:15.030688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:58.869 [2024-07-14 07:44:15.031073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:15.031290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.869 [2024-07-14 07:44:15.031340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:58.869 [2024-07-14 07:44:15.031359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:58.869 [2024-07-14 07:44:15.031508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:58.869 [2024-07-14 07:44:15.031660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:58.869 [2024-07-14 07:44:15.031685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:58.869 [2024-07-14 07:44:15.031702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:58.869 [2024-07-14 07:44:15.034165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.128 [2024-07-14 07:44:15.043473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.128 [2024-07-14 07:44:15.043843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.044100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.044131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.128 [2024-07-14 07:44:15.044150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.128 [2024-07-14 07:44:15.044318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.128 [2024-07-14 07:44:15.044542] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.128 [2024-07-14 07:44:15.044568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.128 [2024-07-14 07:44:15.044584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.128 [2024-07-14 07:44:15.046818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.128 [2024-07-14 07:44:15.056068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.128 [2024-07-14 07:44:15.056524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.056778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.056820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.128 [2024-07-14 07:44:15.056836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.128 [2024-07-14 07:44:15.057017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.128 [2024-07-14 07:44:15.057205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.128 [2024-07-14 07:44:15.057231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.128 [2024-07-14 07:44:15.057248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.128 [2024-07-14 07:44:15.059557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.128 [2024-07-14 07:44:15.068584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.128 [2024-07-14 07:44:15.068987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.069320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.069370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.128 [2024-07-14 07:44:15.069389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.128 [2024-07-14 07:44:15.069537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.128 [2024-07-14 07:44:15.069724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.128 [2024-07-14 07:44:15.069748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.128 [2024-07-14 07:44:15.069764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.128 [2024-07-14 07:44:15.071878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.128 [2024-07-14 07:44:15.081085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.128 [2024-07-14 07:44:15.081481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.081713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.081746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.128 [2024-07-14 07:44:15.081781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.128 [2024-07-14 07:44:15.082003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.128 [2024-07-14 07:44:15.082146] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.128 [2024-07-14 07:44:15.082185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.128 [2024-07-14 07:44:15.082202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.128 [2024-07-14 07:44:15.084435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.128 [2024-07-14 07:44:15.093626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.128 [2024-07-14 07:44:15.094045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.094292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.094334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.128 [2024-07-14 07:44:15.094351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.128 [2024-07-14 07:44:15.094576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.128 [2024-07-14 07:44:15.094782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.128 [2024-07-14 07:44:15.094808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.128 [2024-07-14 07:44:15.094825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.128 [2024-07-14 07:44:15.097108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.128 [2024-07-14 07:44:15.106284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.128 [2024-07-14 07:44:15.106699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.106890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.106933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.128 [2024-07-14 07:44:15.106951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.128 [2024-07-14 07:44:15.107098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.128 [2024-07-14 07:44:15.107249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.128 [2024-07-14 07:44:15.107273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.128 [2024-07-14 07:44:15.107289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.128 [2024-07-14 07:44:15.109703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.128 [2024-07-14 07:44:15.118828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.128 [2024-07-14 07:44:15.119286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.119669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.119733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.128 [2024-07-14 07:44:15.119751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.128 [2024-07-14 07:44:15.119969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.128 [2024-07-14 07:44:15.120158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.128 [2024-07-14 07:44:15.120184] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.128 [2024-07-14 07:44:15.120200] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.128 [2024-07-14 07:44:15.122631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.128 [2024-07-14 07:44:15.131393] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.128 [2024-07-14 07:44:15.131798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.132139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.128 [2024-07-14 07:44:15.132206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.128 [2024-07-14 07:44:15.132231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.128 [2024-07-14 07:44:15.132416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.132585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.132611] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.132627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.134919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.144003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.144431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.144731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.144782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.144801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.144996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.145167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.145192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.145208] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.147592] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.156557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.157009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.157207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.157241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.157280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.157465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.157653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.157678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.157694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.159925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.169153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.169601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.169790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.169819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.169837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.169948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.170101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.170126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.170142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.172452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.181686] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.182109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.182325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.182375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.182395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.182525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.182676] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.182700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.182716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.185229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.194125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.194660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.194898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.194928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.194946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.195111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.195246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.195270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.195286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.197689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.206667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.207094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.207316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.207346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.207364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.207512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.207669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.207695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.207711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.209877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.219126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.219550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.219931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.219961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.219979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.220090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.220258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.220283] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.220299] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.222528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.231690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.232019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.232255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.232303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.232322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.232452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.232607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.232633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.232650] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.235095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.244315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.244699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.244948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.244976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.129 [2024-07-14 07:44:15.244992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.129 [2024-07-14 07:44:15.245202] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.129 [2024-07-14 07:44:15.245371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.129 [2024-07-14 07:44:15.245401] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.129 [2024-07-14 07:44:15.245419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.129 [2024-07-14 07:44:15.247856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.129 [2024-07-14 07:44:15.256878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.129 [2024-07-14 07:44:15.257286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.257546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.129 [2024-07-14 07:44:15.257591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.130 [2024-07-14 07:44:15.257609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.130 [2024-07-14 07:44:15.257792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.130 [2024-07-14 07:44:15.257998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.130 [2024-07-14 07:44:15.258022] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.130 [2024-07-14 07:44:15.258037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.130 [2024-07-14 07:44:15.260441] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.130 [2024-07-14 07:44:15.269402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.130 [2024-07-14 07:44:15.269783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.130 [2024-07-14 07:44:15.269984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.130 [2024-07-14 07:44:15.270012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.130 [2024-07-14 07:44:15.270029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.130 [2024-07-14 07:44:15.270198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.130 [2024-07-14 07:44:15.270350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.130 [2024-07-14 07:44:15.270375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.130 [2024-07-14 07:44:15.270392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.130 [2024-07-14 07:44:15.272618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.130 [2024-07-14 07:44:15.281966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.130 [2024-07-14 07:44:15.282485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.130 [2024-07-14 07:44:15.282765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.130 [2024-07-14 07:44:15.282813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.130 [2024-07-14 07:44:15.282832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.130 [2024-07-14 07:44:15.283061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.130 [2024-07-14 07:44:15.283231] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.130 [2024-07-14 07:44:15.283256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.130 [2024-07-14 07:44:15.283277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.130 [2024-07-14 07:44:15.285633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.130 [2024-07-14 07:44:15.294674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.130 [2024-07-14 07:44:15.295120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.130 [2024-07-14 07:44:15.295344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.130 [2024-07-14 07:44:15.295373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.130 [2024-07-14 07:44:15.295390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.130 [2024-07-14 07:44:15.295611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.130 [2024-07-14 07:44:15.295813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.130 [2024-07-14 07:44:15.295838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.130 [2024-07-14 07:44:15.295855] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.389 [2024-07-14 07:44:15.298541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.389 [2024-07-14 07:44:15.307387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.389 [2024-07-14 07:44:15.307807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.308014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.308046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.389 [2024-07-14 07:44:15.308065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.389 [2024-07-14 07:44:15.308231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.389 [2024-07-14 07:44:15.308419] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.389 [2024-07-14 07:44:15.308445] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.389 [2024-07-14 07:44:15.308461] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.389 [2024-07-14 07:44:15.310710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.389 [2024-07-14 07:44:15.320042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.389 [2024-07-14 07:44:15.320500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.320928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.320958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.389 [2024-07-14 07:44:15.320976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.389 [2024-07-14 07:44:15.321143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.389 [2024-07-14 07:44:15.321259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.389 [2024-07-14 07:44:15.321284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.389 [2024-07-14 07:44:15.321300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.389 [2024-07-14 07:44:15.323647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.389 [2024-07-14 07:44:15.332534] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.389 [2024-07-14 07:44:15.332932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.333147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.333173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.389 [2024-07-14 07:44:15.333190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.389 [2024-07-14 07:44:15.333407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.389 [2024-07-14 07:44:15.333631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.389 [2024-07-14 07:44:15.333656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.389 [2024-07-14 07:44:15.333672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.389 [2024-07-14 07:44:15.335946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.389 [2024-07-14 07:44:15.345109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.389 [2024-07-14 07:44:15.345484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.345764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.345798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.389 [2024-07-14 07:44:15.345833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.389 [2024-07-14 07:44:15.345993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.389 [2024-07-14 07:44:15.346164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.389 [2024-07-14 07:44:15.346189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.389 [2024-07-14 07:44:15.346205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.389 [2024-07-14 07:44:15.348540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.389 [2024-07-14 07:44:15.357874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.389 [2024-07-14 07:44:15.358282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.358545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.358590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.389 [2024-07-14 07:44:15.358609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.389 [2024-07-14 07:44:15.358756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.389 [2024-07-14 07:44:15.358938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.389 [2024-07-14 07:44:15.358964] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.389 [2024-07-14 07:44:15.358980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.389 [2024-07-14 07:44:15.361280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.389 [2024-07-14 07:44:15.370633] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.389 [2024-07-14 07:44:15.371078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.371296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.371322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.389 [2024-07-14 07:44:15.371338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.389 [2024-07-14 07:44:15.371498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.389 [2024-07-14 07:44:15.371687] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.389 [2024-07-14 07:44:15.371711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.389 [2024-07-14 07:44:15.371727] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.389 [2024-07-14 07:44:15.374074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.389 [2024-07-14 07:44:15.383044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.389 [2024-07-14 07:44:15.383401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.383648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.383696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.389 [2024-07-14 07:44:15.383714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.389 [2024-07-14 07:44:15.383891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.389 [2024-07-14 07:44:15.384080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.389 [2024-07-14 07:44:15.384105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.389 [2024-07-14 07:44:15.384121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.389 [2024-07-14 07:44:15.386457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.389 [2024-07-14 07:44:15.395630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.389 [2024-07-14 07:44:15.396036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.396312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.389 [2024-07-14 07:44:15.396341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.389 [2024-07-14 07:44:15.396359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.389 [2024-07-14 07:44:15.396471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.389 [2024-07-14 07:44:15.396604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.389 [2024-07-14 07:44:15.396629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.389 [2024-07-14 07:44:15.396645] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.398944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.408306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.390 [2024-07-14 07:44:15.408731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.408911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.408942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.390 [2024-07-14 07:44:15.408960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.390 [2024-07-14 07:44:15.409144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.390 [2024-07-14 07:44:15.409278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.390 [2024-07-14 07:44:15.409303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.390 [2024-07-14 07:44:15.409319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.411604] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.421182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.390 [2024-07-14 07:44:15.421606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.421852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.421889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.390 [2024-07-14 07:44:15.421908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.390 [2024-07-14 07:44:15.422110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.390 [2024-07-14 07:44:15.422316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.390 [2024-07-14 07:44:15.422341] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.390 [2024-07-14 07:44:15.422357] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.424882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.433767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.390 [2024-07-14 07:44:15.434119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.434341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.434371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.390 [2024-07-14 07:44:15.434389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.390 [2024-07-14 07:44:15.434590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.390 [2024-07-14 07:44:15.434760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.390 [2024-07-14 07:44:15.434785] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.390 [2024-07-14 07:44:15.434802] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.437239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.446440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.390 [2024-07-14 07:44:15.446849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.447069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.447099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.390 [2024-07-14 07:44:15.447117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.390 [2024-07-14 07:44:15.447337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.390 [2024-07-14 07:44:15.447489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.390 [2024-07-14 07:44:15.447514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.390 [2024-07-14 07:44:15.447530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.449949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.459233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.390 [2024-07-14 07:44:15.459643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.459881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.459912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.390 [2024-07-14 07:44:15.459930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.390 [2024-07-14 07:44:15.460096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.390 [2024-07-14 07:44:15.460266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.390 [2024-07-14 07:44:15.460291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.390 [2024-07-14 07:44:15.460307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.462790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.471692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.390 [2024-07-14 07:44:15.472062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.472410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.472463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.390 [2024-07-14 07:44:15.472481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.390 [2024-07-14 07:44:15.472592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.390 [2024-07-14 07:44:15.472779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.390 [2024-07-14 07:44:15.472804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.390 [2024-07-14 07:44:15.472820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.475277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.484278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.390 [2024-07-14 07:44:15.484635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.484874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.484905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.390 [2024-07-14 07:44:15.484930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.390 [2024-07-14 07:44:15.485061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.390 [2024-07-14 07:44:15.485268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.390 [2024-07-14 07:44:15.485292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.390 [2024-07-14 07:44:15.485309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.487889] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.496589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.390 [2024-07-14 07:44:15.496980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.497189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.390 [2024-07-14 07:44:15.497218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.390 [2024-07-14 07:44:15.497236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.390 [2024-07-14 07:44:15.497402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.390 [2024-07-14 07:44:15.497590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.390 [2024-07-14 07:44:15.497614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.390 [2024-07-14 07:44:15.497631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.390 [2024-07-14 07:44:15.499931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.390 [2024-07-14 07:44:15.508945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.390 [2024-07-14 07:44:15.509295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.390 [2024-07-14 07:44:15.509502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.390 [2024-07-14 07:44:15.509531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.390 [2024-07-14 07:44:15.509549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.390 [2024-07-14 07:44:15.509697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.390 [2024-07-14 07:44:15.509878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.390 [2024-07-14 07:44:15.509903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.390 [2024-07-14 07:44:15.509919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.390 [2024-07-14 07:44:15.512291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.390 [2024-07-14 07:44:15.521355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.390 [2024-07-14 07:44:15.521836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.390 [2024-07-14 07:44:15.522039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.390 [2024-07-14 07:44:15.522065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.390 [2024-07-14 07:44:15.522081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.390 [2024-07-14 07:44:15.522234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.390 [2024-07-14 07:44:15.522405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.391 [2024-07-14 07:44:15.522430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.391 [2024-07-14 07:44:15.522446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.391 [2024-07-14 07:44:15.524822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.391 [2024-07-14 07:44:15.533952] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.391 [2024-07-14 07:44:15.534376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.391 [2024-07-14 07:44:15.534621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.391 [2024-07-14 07:44:15.534662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.391 [2024-07-14 07:44:15.534678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.391 [2024-07-14 07:44:15.534891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.391 [2024-07-14 07:44:15.535026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.391 [2024-07-14 07:44:15.535050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.391 [2024-07-14 07:44:15.535066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.391 [2024-07-14 07:44:15.537291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.391 [2024-07-14 07:44:15.546533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.391 [2024-07-14 07:44:15.546913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.391 [2024-07-14 07:44:15.547161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.391 [2024-07-14 07:44:15.547187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.391 [2024-07-14 07:44:15.547203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.391 [2024-07-14 07:44:15.547369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.391 [2024-07-14 07:44:15.547521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.391 [2024-07-14 07:44:15.547545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.391 [2024-07-14 07:44:15.547562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.391 [2024-07-14 07:44:15.549810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.650 [2024-07-14 07:44:15.559362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.650 [2024-07-14 07:44:15.559827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.560084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.560111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.650 [2024-07-14 07:44:15.560127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.650 [2024-07-14 07:44:15.560326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.650 [2024-07-14 07:44:15.560503] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.650 [2024-07-14 07:44:15.560528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.650 [2024-07-14 07:44:15.560545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.650 [2024-07-14 07:44:15.563099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.650 [2024-07-14 07:44:15.571914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.650 [2024-07-14 07:44:15.572538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.572796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.572826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.650 [2024-07-14 07:44:15.572845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.650 [2024-07-14 07:44:15.573056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.650 [2024-07-14 07:44:15.573208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.650 [2024-07-14 07:44:15.573233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.650 [2024-07-14 07:44:15.573249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.650 [2024-07-14 07:44:15.575512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.650 [2024-07-14 07:44:15.584559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.650 [2024-07-14 07:44:15.584980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.585188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.585218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.650 [2024-07-14 07:44:15.585236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.650 [2024-07-14 07:44:15.585420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.650 [2024-07-14 07:44:15.585573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.650 [2024-07-14 07:44:15.585597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.650 [2024-07-14 07:44:15.585614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.650 [2024-07-14 07:44:15.588103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.650 [2024-07-14 07:44:15.596988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.650 [2024-07-14 07:44:15.597456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.597664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.597689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.650 [2024-07-14 07:44:15.597705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.650 [2024-07-14 07:44:15.597850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.650 [2024-07-14 07:44:15.598044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.650 [2024-07-14 07:44:15.598067] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.650 [2024-07-14 07:44:15.598081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.650 [2024-07-14 07:44:15.600409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.650 [2024-07-14 07:44:15.609444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.650 [2024-07-14 07:44:15.609823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.610046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.610078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.650 [2024-07-14 07:44:15.610097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.650 [2024-07-14 07:44:15.610246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.650 [2024-07-14 07:44:15.610417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.650 [2024-07-14 07:44:15.610441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.650 [2024-07-14 07:44:15.610458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.650 [2024-07-14 07:44:15.612741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.650 [2024-07-14 07:44:15.621951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.650 [2024-07-14 07:44:15.622335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.622618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.622693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.650 [2024-07-14 07:44:15.622711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.650 [2024-07-14 07:44:15.622906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.650 [2024-07-14 07:44:15.623095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.650 [2024-07-14 07:44:15.623120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.650 [2024-07-14 07:44:15.623135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.650 [2024-07-14 07:44:15.625258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.650 [2024-07-14 07:44:15.634570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.650 [2024-07-14 07:44:15.634925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.635157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.635182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.650 [2024-07-14 07:44:15.635198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.650 [2024-07-14 07:44:15.635380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.650 [2024-07-14 07:44:15.635568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.650 [2024-07-14 07:44:15.635593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.650 [2024-07-14 07:44:15.635615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.650 [2024-07-14 07:44:15.638092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.650 [2024-07-14 07:44:15.647152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.650 [2024-07-14 07:44:15.647511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.647786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.650 [2024-07-14 07:44:15.647839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.650 [2024-07-14 07:44:15.647857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.650 [2024-07-14 07:44:15.648018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.650 [2024-07-14 07:44:15.648153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.650 [2024-07-14 07:44:15.648177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.650 [2024-07-14 07:44:15.648194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.650 [2024-07-14 07:44:15.650298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.651 [2024-07-14 07:44:15.659803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.660190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.660442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.660467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.660498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.660650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.660823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.660848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.660874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.663215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.651 [2024-07-14 07:44:15.672448] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.672828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.673052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.673082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.673100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.673266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.673399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.673424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.673445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.675838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.651 [2024-07-14 07:44:15.685052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.685496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.685797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.685855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.685885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.686054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.686225] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.686249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.686265] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.688618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.651 [2024-07-14 07:44:15.697861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.698254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.698461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.698491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.698508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.698675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.698864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.698898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.698915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.701322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.651 [2024-07-14 07:44:15.710751] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.711135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.711433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.711496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.711514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.711716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.711915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.711940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.711956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.714276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.651 [2024-07-14 07:44:15.723296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.723737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.723978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.724006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.724023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.724211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.724367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.724392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.724409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.726565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.651 [2024-07-14 07:44:15.735740] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.736151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.736497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.736554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.736571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.736720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.736884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.736909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.736926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.739262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.651 [2024-07-14 07:44:15.748495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.748896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.749137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.749163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.749179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.749378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.749547] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.749572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.749588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.751820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.651 [2024-07-14 07:44:15.761151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.761529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.761920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.761950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.761967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.762133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.762267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.762291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.762307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.764662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.651 [2024-07-14 07:44:15.773896] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.651 [2024-07-14 07:44:15.774327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.774648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.651 [2024-07-14 07:44:15.774693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.651 [2024-07-14 07:44:15.774711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.651 [2024-07-14 07:44:15.774858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.651 [2024-07-14 07:44:15.775040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.651 [2024-07-14 07:44:15.775064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.651 [2024-07-14 07:44:15.775081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.651 [2024-07-14 07:44:15.777561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.652 [2024-07-14 07:44:15.786591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.652 [2024-07-14 07:44:15.786984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.652 [2024-07-14 07:44:15.787197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.652 [2024-07-14 07:44:15.787226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.652 [2024-07-14 07:44:15.787244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.652 [2024-07-14 07:44:15.787410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.652 [2024-07-14 07:44:15.787597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.652 [2024-07-14 07:44:15.787622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.652 [2024-07-14 07:44:15.787638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.652 [2024-07-14 07:44:15.789946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.652 [2024-07-14 07:44:15.799253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.652 [2024-07-14 07:44:15.799622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.652 [2024-07-14 07:44:15.799832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.652 [2024-07-14 07:44:15.799861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.652 [2024-07-14 07:44:15.799890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.652 [2024-07-14 07:44:15.800058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.652 [2024-07-14 07:44:15.800282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.652 [2024-07-14 07:44:15.800306] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.652 [2024-07-14 07:44:15.800323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.652 [2024-07-14 07:44:15.802551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.652 [2024-07-14 07:44:15.811852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.652 [2024-07-14 07:44:15.812248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.652 [2024-07-14 07:44:15.812401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.652 [2024-07-14 07:44:15.812426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.652 [2024-07-14 07:44:15.812441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.652 [2024-07-14 07:44:15.812581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.652 [2024-07-14 07:44:15.812784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.652 [2024-07-14 07:44:15.812809] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.652 [2024-07-14 07:44:15.812825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.652 [2024-07-14 07:44:15.815374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.913 [2024-07-14 07:44:15.824474] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.913 [2024-07-14 07:44:15.824995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.913 [2024-07-14 07:44:15.825208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.913 [2024-07-14 07:44:15.825239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.913 [2024-07-14 07:44:15.825258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.913 [2024-07-14 07:44:15.825407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.913 [2024-07-14 07:44:15.825614] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.913 [2024-07-14 07:44:15.825639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.913 [2024-07-14 07:44:15.825655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.913 [2024-07-14 07:44:15.828222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.913 [2024-07-14 07:44:15.837282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.913 [2024-07-14 07:44:15.837646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.837857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.837895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.837920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.838069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.838239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.838264] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.838280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.840653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.914 [2024-07-14 07:44:15.849871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.850244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.850458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.850487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.850506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.850653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.850824] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.850848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.850864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.853378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.914 [2024-07-14 07:44:15.862572] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.862965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.863165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.863194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.863213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.863397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.863549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.863573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.863589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.866015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.914 [2024-07-14 07:44:15.875047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.875429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.875640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.875667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.875684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.875835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.876014] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.876039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.876055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.878427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.914 [2024-07-14 07:44:15.887549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.888028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.888340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.888366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.888382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.888587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.888775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.888799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.888816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.891215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.914 [2024-07-14 07:44:15.900145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.900513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.900751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.900781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.900800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.900993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.901164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.901189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.901205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.903523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.914 [2024-07-14 07:44:15.912461] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.912835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.913047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.913078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.913096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.913270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.913440] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.913465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.913481] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.915953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.914 [2024-07-14 07:44:15.925051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.925438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.925735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.925764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.925782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.925959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.926130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.926155] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.926171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.928561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.914 [2024-07-14 07:44:15.937534] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.937893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.938080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.938111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.938130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.938314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.938485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.938509] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.938526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.940870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.914 [2024-07-14 07:44:15.950018] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.950431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.950674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.914 [2024-07-14 07:44:15.950703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.914 [2024-07-14 07:44:15.950721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.914 [2024-07-14 07:44:15.950933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.914 [2024-07-14 07:44:15.951073] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.914 [2024-07-14 07:44:15.951098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.914 [2024-07-14 07:44:15.951115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.914 [2024-07-14 07:44:15.953559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.914 [2024-07-14 07:44:15.962505] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.914 [2024-07-14 07:44:15.962880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.915 [2024-07-14 07:44:15.963052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.915 [2024-07-14 07:44:15.963079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.915 [2024-07-14 07:44:15.963096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.915 [2024-07-14 07:44:15.963251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.915 [2024-07-14 07:44:15.963393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.915 [2024-07-14 07:44:15.963416] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.915 [2024-07-14 07:44:15.963431] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.915 [2024-07-14 07:44:15.965418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.915 [2024-07-14 07:44:15.974766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.915 [2024-07-14 07:44:15.975151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.915 [2024-07-14 07:44:15.975388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.915 [2024-07-14 07:44:15.975415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.915 [2024-07-14 07:44:15.975432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.915 [2024-07-14 07:44:15.975608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.915 [2024-07-14 07:44:15.975752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.915 [2024-07-14 07:44:15.975773] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.915 [2024-07-14 07:44:15.975787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.915 [2024-07-14 07:44:15.977721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.915 [2024-07-14 07:44:15.987257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.915 [2024-07-14 07:44:15.987691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.915 [2024-07-14 07:44:15.987915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.915 [2024-07-14 07:44:15.987944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.915 [2024-07-14 07:44:15.987975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.915 [2024-07-14 07:44:15.988191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.915 [2024-07-14 07:44:15.988312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.915 [2024-07-14 07:44:15.988337] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.915 [2024-07-14 07:44:15.988351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.915 [2024-07-14 07:44:15.990676] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.915 [2024-07-14 07:44:15.999957] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.915 [2024-07-14 07:44:16.000415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.915 [2024-07-14 07:44:16.000649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.915 [2024-07-14 07:44:16.000678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:26:59.915 [2024-07-14 07:44:16.000696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:26:59.915 [2024-07-14 07:44:16.000892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:26:59.915 [2024-07-14 07:44:16.001073] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.915 [2024-07-14 07:44:16.001098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.915 [2024-07-14 07:44:16.001114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.915 [2024-07-14 07:44:16.003626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:59.915 [2024-07-14 07:44:16.012679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.915 [2024-07-14 07:44:16.013051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.013382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.013435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.915 [2024-07-14 07:44:16.013453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.915 [2024-07-14 07:44:16.013673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.915 [2024-07-14 07:44:16.013826] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.915 [2024-07-14 07:44:16.013850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.915 [2024-07-14 07:44:16.013875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.915 [2024-07-14 07:44:16.016299] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.915 [2024-07-14 07:44:16.025283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.915 [2024-07-14 07:44:16.025887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.026143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.026188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.915 [2024-07-14 07:44:16.026206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.915 [2024-07-14 07:44:16.026408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.915 [2024-07-14 07:44:16.026615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.915 [2024-07-14 07:44:16.026640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.915 [2024-07-14 07:44:16.026662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.915 [2024-07-14 07:44:16.028970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.915 [2024-07-14 07:44:16.037778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.915 [2024-07-14 07:44:16.038161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.038464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.038493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.915 [2024-07-14 07:44:16.038510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.915 [2024-07-14 07:44:16.038618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.915 [2024-07-14 07:44:16.038745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.915 [2024-07-14 07:44:16.038770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.915 [2024-07-14 07:44:16.038786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.915 [2024-07-14 07:44:16.041044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.915 [2024-07-14 07:44:16.050257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.915 [2024-07-14 07:44:16.050606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.050785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.050814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.915 [2024-07-14 07:44:16.050832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.915 [2024-07-14 07:44:16.050992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.915 [2024-07-14 07:44:16.051197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.915 [2024-07-14 07:44:16.051222] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.915 [2024-07-14 07:44:16.051239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.915 [2024-07-14 07:44:16.053611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.915 [2024-07-14 07:44:16.062861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.915 [2024-07-14 07:44:16.063243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.063552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.063581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.915 [2024-07-14 07:44:16.063599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.915 [2024-07-14 07:44:16.063783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.915 [2024-07-14 07:44:16.063964] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.915 [2024-07-14 07:44:16.063990] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.915 [2024-07-14 07:44:16.064006] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.915 [2024-07-14 07:44:16.066434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:59.915 [2024-07-14 07:44:16.075400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:59.915 [2024-07-14 07:44:16.075830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.076073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.915 [2024-07-14 07:44:16.076100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:26:59.915 [2024-07-14 07:44:16.076116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:26:59.915 [2024-07-14 07:44:16.076319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:26:59.915 [2024-07-14 07:44:16.076507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:59.915 [2024-07-14 07:44:16.076532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:59.915 [2024-07-14 07:44:16.076548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:59.915 [2024-07-14 07:44:16.079087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.177 [2024-07-14 07:44:16.088161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.088593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.088846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.088888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.088925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.177 [2024-07-14 07:44:16.089059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.177 [2024-07-14 07:44:16.089227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.177 [2024-07-14 07:44:16.089252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.177 [2024-07-14 07:44:16.089269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.177 [2024-07-14 07:44:16.091427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.177 [2024-07-14 07:44:16.100689] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.101075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.101285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.101315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.101333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.177 [2024-07-14 07:44:16.101499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.177 [2024-07-14 07:44:16.101688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.177 [2024-07-14 07:44:16.101712] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.177 [2024-07-14 07:44:16.101728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.177 [2024-07-14 07:44:16.104004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.177 [2024-07-14 07:44:16.113510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.113927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.114135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.114164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.114182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.177 [2024-07-14 07:44:16.114312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.177 [2024-07-14 07:44:16.114518] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.177 [2024-07-14 07:44:16.114543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.177 [2024-07-14 07:44:16.114559] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.177 [2024-07-14 07:44:16.116908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.177 [2024-07-14 07:44:16.125953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.126365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.126568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.126597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.126615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.177 [2024-07-14 07:44:16.126780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.177 [2024-07-14 07:44:16.126944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.177 [2024-07-14 07:44:16.126970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.177 [2024-07-14 07:44:16.126986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.177 [2024-07-14 07:44:16.129330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.177 [2024-07-14 07:44:16.138530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.138974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.139218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.139245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.139261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.177 [2024-07-14 07:44:16.139465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.177 [2024-07-14 07:44:16.139635] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.177 [2024-07-14 07:44:16.139661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.177 [2024-07-14 07:44:16.139677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.177 [2024-07-14 07:44:16.142164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.177 [2024-07-14 07:44:16.151081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.151548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 14375 Killed "${NVMF_APP[@]}" "$@"
00:27:00.177 [2024-07-14 07:44:16.151824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.151853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.151880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.177 07:44:16 -- host/bdevperf.sh@36 -- # tgt_init
00:27:00.177 [2024-07-14 07:44:16.152078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.177 07:44:16 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:00.177 [2024-07-14 07:44:16.152254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.177 [2024-07-14 07:44:16.152280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.177 [2024-07-14 07:44:16.152297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.177 07:44:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:00.177 07:44:16 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:00.177 07:44:16 -- common/autotest_common.sh@10 -- # set +x
00:27:00.177 [2024-07-14 07:44:16.154705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.177 07:44:16 -- nvmf/common.sh@469 -- # nvmfpid=15376
00:27:00.177 07:44:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:00.177 07:44:16 -- nvmf/common.sh@470 -- # waitforlisten 15376
00:27:00.177 07:44:16 -- common/autotest_common.sh@819 -- # '[' -z 15376 ']'
00:27:00.177 07:44:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:00.177 07:44:16 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:00.177 07:44:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:00.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:00.177 07:44:16 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:00.177 07:44:16 -- common/autotest_common.sh@10 -- # set +x
00:27:00.177 [2024-07-14 07:44:16.163824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.164200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.164415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.164440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.164457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.177 [2024-07-14 07:44:16.164635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.177 [2024-07-14 07:44:16.164778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.177 [2024-07-14 07:44:16.164802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.177 [2024-07-14 07:44:16.164819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.177 [2024-07-14 07:44:16.167217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
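For context on the relaunch traced above: nvmf_tgt is restarted with -i 0 (shared-memory instance id), -e 0xFFFF (tracepoint group mask) and -m 0xE (reactor core mask). 0xE is binary 1110, i.e. cores 1-3, which is why the log later reports "Total cores available: 3" and reactors starting on cores 1, 2 and 3. A small sketch decoding the mask (plain bash, illustration only, not part of the test scripts):

mask=0xE                      # reactor core mask passed to nvmf_tgt via -m
for core in {0..7}; do
    if (( (mask >> core) & 1 )); then
        echo "reactor expected on core $core"   # prints cores 1, 2 and 3
    fi
done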
00:27:00.177 [2024-07-14 07:44:16.176168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.176506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.176721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.176746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.176767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.177 [2024-07-14 07:44:16.176894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.177 [2024-07-14 07:44:16.176981] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.177 [2024-07-14 07:44:16.177016] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.177 [2024-07-14 07:44:16.177030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.177 [2024-07-14 07:44:16.179365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.177 [2024-07-14 07:44:16.188394] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.177 [2024-07-14 07:44:16.188772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.189021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.177 [2024-07-14 07:44:16.189047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.177 [2024-07-14 07:44:16.189063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.189210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.189368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.189387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.189400] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.191366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.196680] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:27:00.178 [2024-07-14 07:44:16.196750] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:00.178 [2024-07-14 07:44:16.200679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.201017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.201179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.201205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.201221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.201356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.201514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.201534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.201547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.203727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.212828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.213186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.213464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.213490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.213506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.213699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.213825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.213845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.213882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.215870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.225089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.225441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.225650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.225675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.225690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.225833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.226012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.226033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.226046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.228066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 EAL: No free 2048 kB hugepages reported on node 1
00:27:00.178 [2024-07-14 07:44:16.237626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.237989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.238209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.238234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.238250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.238397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.238539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.238558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.238571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.240793] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.250115] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.250514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.250734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.250764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.250781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.250940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.251087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.251107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.251120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.253429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.262590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.263011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.263200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.263226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.263241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.263374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.263531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.263550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.263563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.265780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.266439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:00.178 [2024-07-14 07:44:16.275119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.275669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.275930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.275958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.275978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.276168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.276329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.276351] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.276367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.278657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.287789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.288216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.288394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.288421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.288446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.288615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.288805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.288826] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.288841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.291242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.300500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.178 [2024-07-14 07:44:16.301002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.301175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.178 [2024-07-14 07:44:16.301201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.178 [2024-07-14 07:44:16.301217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.178 [2024-07-14 07:44:16.301413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.178 [2024-07-14 07:44:16.301606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.178 [2024-07-14 07:44:16.301627] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.178 [2024-07-14 07:44:16.301656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.178 [2024-07-14 07:44:16.303961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.178 [2024-07-14 07:44:16.312984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.179 [2024-07-14 07:44:16.313439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.179 [2024-07-14 07:44:16.313607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.179 [2024-07-14 07:44:16.313633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.179 [2024-07-14 07:44:16.313648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.179 [2024-07-14 07:44:16.313780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.179 [2024-07-14 07:44:16.313936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.179 [2024-07-14 07:44:16.313958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.179 [2024-07-14 07:44:16.313971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.179 [2024-07-14 07:44:16.316333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.179 [2024-07-14 07:44:16.325406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.179 [2024-07-14 07:44:16.325828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.179 [2024-07-14 07:44:16.326056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.179 [2024-07-14 07:44:16.326083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.179 [2024-07-14 07:44:16.326106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.179 [2024-07-14 07:44:16.326283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.179 [2024-07-14 07:44:16.326441] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.179 [2024-07-14 07:44:16.326461] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.179 [2024-07-14 07:44:16.326474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.179 [2024-07-14 07:44:16.328741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.179 [2024-07-14 07:44:16.337930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.179 [2024-07-14 07:44:16.338511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.179 [2024-07-14 07:44:16.338741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.179 [2024-07-14 07:44:16.338767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.179 [2024-07-14 07:44:16.338788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.179 [2024-07-14 07:44:16.338957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.179 [2024-07-14 07:44:16.339112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.179 [2024-07-14 07:44:16.339134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.179 [2024-07-14 07:44:16.339151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.179 [2024-07-14 07:44:16.341800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-14 07:44:16.350446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-14 07:44:16.350910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.351125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.351156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-14 07:44:16.351190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.440 [2024-07-14 07:44:16.351418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.440 [2024-07-14 07:44:16.351627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.440 [2024-07-14 07:44:16.351651] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.440 [2024-07-14 07:44:16.351665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.440 [2024-07-14 07:44:16.353974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-14 07:44:16.362961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-14 07:44:16.363322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.363495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.363522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-14 07:44:16.363538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.440 [2024-07-14 07:44:16.363701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.440 [2024-07-14 07:44:16.363854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.440 [2024-07-14 07:44:16.363885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.440 [2024-07-14 07:44:16.363900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.440 [2024-07-14 07:44:16.366329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-14 07:44:16.375366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-14 07:44:16.375845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.376081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.376108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-14 07:44:16.376123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.440 [2024-07-14 07:44:16.376272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.440 [2024-07-14 07:44:16.376490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.440 [2024-07-14 07:44:16.376521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.440 [2024-07-14 07:44:16.376534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.440 [2024-07-14 07:44:16.378967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-14 07:44:16.385421] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:00.440 [2024-07-14 07:44:16.385531] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:00.440 [2024-07-14 07:44:16.385548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:00.440 [2024-07-14 07:44:16.385560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
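The app_setup_trace notices above name the two ways to collect the tracepoints enabled by -e 0xFFFF; restated as commands (the -i 0 instance id matches the nvmf_tgt launch flag, commands taken from the log's own hint):

spdk_trace -s nvmf -i 0            # snapshot of events at runtime
cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the shm file for offline analysis/debug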
00:27:00.440 [2024-07-14 07:44:16.385681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:00.440 [2024-07-14 07:44:16.385739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:00.440 [2024-07-14 07:44:16.385743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:00.440 [2024-07-14 07:44:16.387740] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-14 07:44:16.388149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.388341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.388367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-14 07:44:16.388383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.440 [2024-07-14 07:44:16.388515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.440 [2024-07-14 07:44:16.388698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.440 [2024-07-14 07:44:16.388720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.440 [2024-07-14 07:44:16.388734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.440 [2024-07-14 07:44:16.390955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-14 07:44:16.400035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-14 07:44:16.400562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.400774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.400801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-14 07:44:16.400820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.440 [2024-07-14 07:44:16.401029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.440 [2024-07-14 07:44:16.401239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.440 [2024-07-14 07:44:16.401261] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.440 [2024-07-14 07:44:16.401277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.440 [2024-07-14 07:44:16.403429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-14 07:44:16.412267] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-14 07:44:16.412784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.412993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-14 07:44:16.413021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-14 07:44:16.413041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.440 [2024-07-14 07:44:16.413201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.413339] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-14 07:44:16.413361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-14 07:44:16.413378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-14 07:44:16.415468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-14 07:44:16.424653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-14 07:44:16.425251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.425455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.425481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-14 07:44:16.425502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.441 [2024-07-14 07:44:16.425696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.425910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-14 07:44:16.425939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-14 07:44:16.425957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-14 07:44:16.428005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-14 07:44:16.437189] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-14 07:44:16.437698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.437930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.437957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-14 07:44:16.437977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.441 [2024-07-14 07:44:16.438216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.438370] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-14 07:44:16.438392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-14 07:44:16.438410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-14 07:44:16.440502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-14 07:44:16.449584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-14 07:44:16.450017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.450228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.450254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-14 07:44:16.450273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.441 [2024-07-14 07:44:16.450432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.450587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-14 07:44:16.450608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-14 07:44:16.450627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-14 07:44:16.452658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-14 07:44:16.461896] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-14 07:44:16.462433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.462647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.462677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-14 07:44:16.462700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.441 [2024-07-14 07:44:16.462908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.463064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-14 07:44:16.463086] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-14 07:44:16.463104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-14 07:44:16.465079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-14 07:44:16.474499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-14 07:44:16.474852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.475070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.475109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-14 07:44:16.475126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.441 [2024-07-14 07:44:16.475289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.475421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-14 07:44:16.475442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-14 07:44:16.475457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-14 07:44:16.477552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-14 07:44:16.486918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-14 07:44:16.487275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.487477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.487503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-14 07:44:16.487519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.441 [2024-07-14 07:44:16.487652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.487784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-14 07:44:16.487804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-14 07:44:16.487818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-14 07:44:16.489908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-14 07:44:16.499345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-14 07:44:16.499759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.499982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.500008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-14 07:44:16.500024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.441 [2024-07-14 07:44:16.500205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.500385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-14 07:44:16.500406] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-14 07:44:16.500420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-14 07:44:16.502448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-14 07:44:16.511389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-14 07:44:16.511753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.511920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-14 07:44:16.511947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-14 07:44:16.511968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.441 [2024-07-14 07:44:16.512151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.441 [2024-07-14 07:44:16.512347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.442 [2024-07-14 07:44:16.512368] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.442 [2024-07-14 07:44:16.512381] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.442 [2024-07-14 07:44:16.514365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.442 [2024-07-14 07:44:16.523808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.442 [2024-07-14 07:44:16.524198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.524382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.524408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.442 [2024-07-14 07:44:16.524423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.442 [2024-07-14 07:44:16.524571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.442 [2024-07-14 07:44:16.524734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.442 [2024-07-14 07:44:16.524755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.442 [2024-07-14 07:44:16.524768] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.442 [2024-07-14 07:44:16.526653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.442 [2024-07-14 07:44:16.535925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.442 [2024-07-14 07:44:16.536281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.536511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.536537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.442 [2024-07-14 07:44:16.536552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.442 [2024-07-14 07:44:16.536702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.442 [2024-07-14 07:44:16.536909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.442 [2024-07-14 07:44:16.536931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.442 [2024-07-14 07:44:16.536945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.442 [2024-07-14 07:44:16.539075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.442 [2024-07-14 07:44:16.548041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.442 [2024-07-14 07:44:16.548383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.548579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.548605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.442 [2024-07-14 07:44:16.548620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.442 [2024-07-14 07:44:16.548757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.442 [2024-07-14 07:44:16.548964] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.442 [2024-07-14 07:44:16.548986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.442 [2024-07-14 07:44:16.549000] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.442 [2024-07-14 07:44:16.551051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.442 [2024-07-14 07:44:16.560417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.442 [2024-07-14 07:44:16.560785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.561002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.561029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.442 [2024-07-14 07:44:16.561044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.442 [2024-07-14 07:44:16.561209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.442 [2024-07-14 07:44:16.561375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.442 [2024-07-14 07:44:16.561395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.442 [2024-07-14 07:44:16.561409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.442 [2024-07-14 07:44:16.563456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.442 [2024-07-14 07:44:16.572722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.442 [2024-07-14 07:44:16.573150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.573333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.573359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.442 [2024-07-14 07:44:16.573375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.442 [2024-07-14 07:44:16.573553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.442 [2024-07-14 07:44:16.573685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.442 [2024-07-14 07:44:16.573706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.442 [2024-07-14 07:44:16.573720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.442 [2024-07-14 07:44:16.576022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.442 [2024-07-14 07:44:16.585159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.442 [2024-07-14 07:44:16.585482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.585695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.585720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.442 [2024-07-14 07:44:16.585736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.442 [2024-07-14 07:44:16.585894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.442 [2024-07-14 07:44:16.586026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.442 [2024-07-14 07:44:16.586047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.442 [2024-07-14 07:44:16.586061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.442 [2024-07-14 07:44:16.588077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.442 [2024-07-14 07:44:16.597473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.442 [2024-07-14 07:44:16.597860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.598089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.442 [2024-07-14 07:44:16.598115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.442 [2024-07-14 07:44:16.598131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.442 [2024-07-14 07:44:16.598328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.442 [2024-07-14 07:44:16.598507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.442 [2024-07-14 07:44:16.598528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.442 [2024-07-14 07:44:16.598541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.442 [2024-07-14 07:44:16.600656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.702 [2024-07-14 07:44:16.610010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.702 [2024-07-14 07:44:16.610361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.702 [2024-07-14 07:44:16.610533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.702 [2024-07-14 07:44:16.610560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.702 [2024-07-14 07:44:16.610576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.702 [2024-07-14 07:44:16.610741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.702 [2024-07-14 07:44:16.610962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.702 [2024-07-14 07:44:16.610984] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.702 [2024-07-14 07:44:16.610998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.702 [2024-07-14 07:44:16.613086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.702 [2024-07-14 07:44:16.622105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.702 [2024-07-14 07:44:16.622506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.702 [2024-07-14 07:44:16.622680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.702 [2024-07-14 07:44:16.622708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.702 [2024-07-14 07:44:16.622724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.702 [2024-07-14 07:44:16.622944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.702 [2024-07-14 07:44:16.623124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.702 [2024-07-14 07:44:16.623149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.702 [2024-07-14 07:44:16.623164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.702 [2024-07-14 07:44:16.625169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.702 [2024-07-14 07:44:16.634343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.703 [2024-07-14 07:44:16.634689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.634884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.634911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.703 [2024-07-14 07:44:16.634928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.703 [2024-07-14 07:44:16.635109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.703 [2024-07-14 07:44:16.635274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.703 [2024-07-14 07:44:16.635295] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.703 [2024-07-14 07:44:16.635308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.703 [2024-07-14 07:44:16.637403] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.703 [2024-07-14 07:44:16.646535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.703 [2024-07-14 07:44:16.646913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.647075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.647102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.703 [2024-07-14 07:44:16.647118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.703 [2024-07-14 07:44:16.647283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.703 [2024-07-14 07:44:16.647464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.703 [2024-07-14 07:44:16.647485] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.703 [2024-07-14 07:44:16.647499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.703 [2024-07-14 07:44:16.649624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.703 [2024-07-14 07:44:16.659130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.703 [2024-07-14 07:44:16.659505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.659685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.659710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.703 [2024-07-14 07:44:16.659726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.703 [2024-07-14 07:44:16.659898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.703 [2024-07-14 07:44:16.660032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.703 [2024-07-14 07:44:16.660053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.703 [2024-07-14 07:44:16.660072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.703 [2024-07-14 07:44:16.662124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.703 [2024-07-14 07:44:16.671370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.703 [2024-07-14 07:44:16.671727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.671912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.671938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.703 [2024-07-14 07:44:16.671954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.703 [2024-07-14 07:44:16.672119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.703 [2024-07-14 07:44:16.672285] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.703 [2024-07-14 07:44:16.672306] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.703 [2024-07-14 07:44:16.672320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.703 [2024-07-14 07:44:16.674325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.703 [2024-07-14 07:44:16.683781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.703 [2024-07-14 07:44:16.684147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.684343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.684369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.703 [2024-07-14 07:44:16.684384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.703 [2024-07-14 07:44:16.684517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.703 [2024-07-14 07:44:16.684637] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.703 [2024-07-14 07:44:16.684658] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.703 [2024-07-14 07:44:16.684672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.703 [2024-07-14 07:44:16.686694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.703 [2024-07-14 07:44:16.696132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.703 [2024-07-14 07:44:16.696534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.696705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.696731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.703 [2024-07-14 07:44:16.696747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.703 [2024-07-14 07:44:16.696904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.703 [2024-07-14 07:44:16.697041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.703 [2024-07-14 07:44:16.697062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.703 [2024-07-14 07:44:16.697076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.703 [2024-07-14 07:44:16.699182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.703 [2024-07-14 07:44:16.708493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.703 [2024-07-14 07:44:16.708844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.709036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.709063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.703 [2024-07-14 07:44:16.709078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.703 [2024-07-14 07:44:16.709275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.703 [2024-07-14 07:44:16.709423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.703 [2024-07-14 07:44:16.709444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.703 [2024-07-14 07:44:16.709457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.703 [2024-07-14 07:44:16.711570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.703 [2024-07-14 07:44:16.720838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.703 [2024-07-14 07:44:16.721213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.721428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.703 [2024-07-14 07:44:16.721453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.703 [2024-07-14 07:44:16.721469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.703 [2024-07-14 07:44:16.721601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.703 [2024-07-14 07:44:16.721766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.703 [2024-07-14 07:44:16.721787] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.703 [2024-07-14 07:44:16.721801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.703 [2024-07-14 07:44:16.723871] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.703 [2024-07-14 07:44:16.733097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.704 [2024-07-14 07:44:16.733425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.733595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.733622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.704 [2024-07-14 07:44:16.733638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.704 [2024-07-14 07:44:16.733787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.704 [2024-07-14 07:44:16.733959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.704 [2024-07-14 07:44:16.733980] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.704 [2024-07-14 07:44:16.733994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.704 [2024-07-14 07:44:16.735931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.704 [2024-07-14 07:44:16.745419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.704 [2024-07-14 07:44:16.745787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.745962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.745988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.704 [2024-07-14 07:44:16.746004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.704 [2024-07-14 07:44:16.746182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.704 [2024-07-14 07:44:16.746314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.704 [2024-07-14 07:44:16.746335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.704 [2024-07-14 07:44:16.746348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.704 [2024-07-14 07:44:16.748516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.704 [2024-07-14 07:44:16.757750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.704 [2024-07-14 07:44:16.758105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.758264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.758289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.704 [2024-07-14 07:44:16.758305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.704 [2024-07-14 07:44:16.758454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.704 [2024-07-14 07:44:16.758588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.704 [2024-07-14 07:44:16.758609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.704 [2024-07-14 07:44:16.758623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.704 [2024-07-14 07:44:16.760730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.704 [2024-07-14 07:44:16.769993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.704 [2024-07-14 07:44:16.770339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.770531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.770556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.704 [2024-07-14 07:44:16.770572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.704 [2024-07-14 07:44:16.770737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.704 [2024-07-14 07:44:16.770895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.704 [2024-07-14 07:44:16.770916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.704 [2024-07-14 07:44:16.770930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.704 [2024-07-14 07:44:16.772717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.704 [2024-07-14 07:44:16.782121] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.704 [2024-07-14 07:44:16.782473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.782654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.782679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.704 [2024-07-14 07:44:16.782694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.704 [2024-07-14 07:44:16.782859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.704 [2024-07-14 07:44:16.783032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.704 [2024-07-14 07:44:16.783053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.704 [2024-07-14 07:44:16.783067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.704 [2024-07-14 07:44:16.785033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.704 [2024-07-14 07:44:16.794398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.704 [2024-07-14 07:44:16.794744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.794929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.794955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.704 [2024-07-14 07:44:16.794971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.704 [2024-07-14 07:44:16.795136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.704 [2024-07-14 07:44:16.795331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.704 [2024-07-14 07:44:16.795352] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.704 [2024-07-14 07:44:16.795366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.704 [2024-07-14 07:44:16.797459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.704 [2024-07-14 07:44:16.806678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.704 [2024-07-14 07:44:16.807028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.807191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.807216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.704 [2024-07-14 07:44:16.807231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.704 [2024-07-14 07:44:16.807364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.704 [2024-07-14 07:44:16.807498] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.704 [2024-07-14 07:44:16.807519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.704 [2024-07-14 07:44:16.807533] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.704 [2024-07-14 07:44:16.809749] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.704 [2024-07-14 07:44:16.819075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.704 [2024-07-14 07:44:16.819434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.819595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.704 [2024-07-14 07:44:16.819625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.704 [2024-07-14 07:44:16.819641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.704 [2024-07-14 07:44:16.819822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.704 [2024-07-14 07:44:16.819967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.705 [2024-07-14 07:44:16.819989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.705 [2024-07-14 07:44:16.820003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.705 [2024-07-14 07:44:16.821960] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.705 [2024-07-14 07:44:16.831554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.705 [2024-07-14 07:44:16.831928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.705 [2024-07-14 07:44:16.832133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.705 [2024-07-14 07:44:16.832158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.705 [2024-07-14 07:44:16.832174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.705 [2024-07-14 07:44:16.832307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.705 [2024-07-14 07:44:16.832454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.705 [2024-07-14 07:44:16.832474] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.705 [2024-07-14 07:44:16.832488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.705 [2024-07-14 07:44:16.834612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.705 [2024-07-14 07:44:16.843812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.705 [2024-07-14 07:44:16.844179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.705 [2024-07-14 07:44:16.844371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.705 [2024-07-14 07:44:16.844396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.705 [2024-07-14 07:44:16.844411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.705 [2024-07-14 07:44:16.844560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.705 [2024-07-14 07:44:16.844741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.705 [2024-07-14 07:44:16.844762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.705 [2024-07-14 07:44:16.844776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.705 [2024-07-14 07:44:16.846794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.705 [2024-07-14 07:44:16.856070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.705 [2024-07-14 07:44:16.856413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.705 [2024-07-14 07:44:16.856604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.705 [2024-07-14 07:44:16.856630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.705 [2024-07-14 07:44:16.856650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.705 [2024-07-14 07:44:16.856767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.705 [2024-07-14 07:44:16.856973] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.705 [2024-07-14 07:44:16.856995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.705 [2024-07-14 07:44:16.857008] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.705 [2024-07-14 07:44:16.858972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.705 [2024-07-14 07:44:16.868649] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.705 [2024-07-14 07:44:16.869046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.705 [2024-07-14 07:44:16.869232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.705 [2024-07-14 07:44:16.869260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.705 [2024-07-14 07:44:16.869277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.705 [2024-07-14 07:44:16.869411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.705 [2024-07-14 07:44:16.869610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.705 [2024-07-14 07:44:16.869634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.705 [2024-07-14 07:44:16.869649] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.965 [2024-07-14 07:44:16.872039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.965 [2024-07-14 07:44:16.880824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.965 [2024-07-14 07:44:16.881224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.881433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.881459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.965 [2024-07-14 07:44:16.881475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.965 [2024-07-14 07:44:16.881624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.965 [2024-07-14 07:44:16.881789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.965 [2024-07-14 07:44:16.881810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.965 [2024-07-14 07:44:16.881824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.965 [2024-07-14 07:44:16.884043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.965 [2024-07-14 07:44:16.893020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.965 [2024-07-14 07:44:16.893392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.893579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.893605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.965 [2024-07-14 07:44:16.893620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.965 [2024-07-14 07:44:16.893792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.965 [2024-07-14 07:44:16.894017] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.965 [2024-07-14 07:44:16.894039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.965 [2024-07-14 07:44:16.894053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.965 [2024-07-14 07:44:16.896158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.965 [2024-07-14 07:44:16.905437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.965 [2024-07-14 07:44:16.905754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.905967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.905995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.965 [2024-07-14 07:44:16.906011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.965 [2024-07-14 07:44:16.906177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.965 [2024-07-14 07:44:16.906358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.965 [2024-07-14 07:44:16.906379] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.965 [2024-07-14 07:44:16.906392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.965 [2024-07-14 07:44:16.908300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.965 [2024-07-14 07:44:16.917769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.965 [2024-07-14 07:44:16.918160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.918347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.918373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.965 [2024-07-14 07:44:16.918389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.965 [2024-07-14 07:44:16.918554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.965 [2024-07-14 07:44:16.918706] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.965 [2024-07-14 07:44:16.918728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.965 [2024-07-14 07:44:16.918741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.965 [2024-07-14 07:44:16.921069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.965 [2024-07-14 07:44:16.930181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.965 [2024-07-14 07:44:16.930569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.930722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.930748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.965 [2024-07-14 07:44:16.930763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.965 [2024-07-14 07:44:16.930953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.965 [2024-07-14 07:44:16.931112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.965 [2024-07-14 07:44:16.931134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.965 [2024-07-14 07:44:16.931163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.965 [2024-07-14 07:44:16.933307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.965 [2024-07-14 07:44:16.942556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.965 [2024-07-14 07:44:16.942954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.943146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.965 [2024-07-14 07:44:16.943172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.965 [2024-07-14 07:44:16.943188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.965 [2024-07-14 07:44:16.943368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.965 [2024-07-14 07:44:16.943553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.965 [2024-07-14 07:44:16.943574] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.965 [2024-07-14 07:44:16.943588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.965 [2024-07-14 07:44:16.945788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.965 [2024-07-14 07:44:16.954878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.966 [2024-07-14 07:44:16.955185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.966 [2024-07-14 07:44:16.955388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.966 [2024-07-14 07:44:16.955413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.966 [2024-07-14 07:44:16.955429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.966 [2024-07-14 07:44:16.955529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.966 [2024-07-14 07:44:16.955692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.966 [2024-07-14 07:44:16.955713] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.966 [2024-07-14 07:44:16.955726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.966 [2024-07-14 07:44:16.957619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.966 [2024-07-14 07:44:16.967294] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.966 [2024-07-14 07:44:16.967684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.966 [2024-07-14 07:44:16.967875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.966 [2024-07-14 07:44:16.967901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420
00:27:00.966 [2024-07-14 07:44:16.967916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set
00:27:00.966 [2024-07-14 07:44:16.968049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor
00:27:00.966 [2024-07-14 07:44:16.968215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.966 [2024-07-14 07:44:16.968240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.966 [2024-07-14 07:44:16.968255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.966 [2024-07-14 07:44:16.970290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.966 [2024-07-14 07:44:16.979410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.966 [2024-07-14 07:44:16.979731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:16.979921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:16.979949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.966 [2024-07-14 07:44:16.979965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.966 [2024-07-14 07:44:16.980131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.966 [2024-07-14 07:44:16.980281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.966 [2024-07-14 07:44:16.980302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.966 [2024-07-14 07:44:16.980316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.966 [2024-07-14 07:44:16.982319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.966 [2024-07-14 07:44:16.991730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.966 [2024-07-14 07:44:16.992061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:16.992240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:16.992265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.966 [2024-07-14 07:44:16.992281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.966 [2024-07-14 07:44:16.992428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.966 [2024-07-14 07:44:16.992560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.966 [2024-07-14 07:44:16.992581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.966 [2024-07-14 07:44:16.992594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.966 [2024-07-14 07:44:16.994595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.966 [2024-07-14 07:44:17.003787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.966 [2024-07-14 07:44:17.004168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:17.004328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:17.004353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.966 [2024-07-14 07:44:17.004369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.966 [2024-07-14 07:44:17.004485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.966 [2024-07-14 07:44:17.004653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.966 [2024-07-14 07:44:17.004674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.966 [2024-07-14 07:44:17.004692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.966 [2024-07-14 07:44:17.006753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.966 [2024-07-14 07:44:17.016245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.966 [2024-07-14 07:44:17.016612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:17.016802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:17.016827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.966 [2024-07-14 07:44:17.016843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.966 [2024-07-14 07:44:17.017016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.966 [2024-07-14 07:44:17.017138] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.966 [2024-07-14 07:44:17.017174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.966 [2024-07-14 07:44:17.017189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.966 [2024-07-14 07:44:17.019317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.966 [2024-07-14 07:44:17.028647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.966 [2024-07-14 07:44:17.028980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:17.029168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:17.029194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.966 [2024-07-14 07:44:17.029210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.966 [2024-07-14 07:44:17.029375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.966 [2024-07-14 07:44:17.029557] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.966 [2024-07-14 07:44:17.029578] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.966 [2024-07-14 07:44:17.029591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.966 [2024-07-14 07:44:17.031682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.966 [2024-07-14 07:44:17.041143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.966 [2024-07-14 07:44:17.041482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:17.041642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.966 [2024-07-14 07:44:17.041667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.966 [2024-07-14 07:44:17.041682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.967 [2024-07-14 07:44:17.041845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.967 [2024-07-14 07:44:17.042059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.967 [2024-07-14 07:44:17.042080] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.967 [2024-07-14 07:44:17.042094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.967 [2024-07-14 07:44:17.044118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.967 [2024-07-14 07:44:17.053255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.967 [2024-07-14 07:44:17.053629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.053808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.053834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.967 [2024-07-14 07:44:17.053850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.967 [2024-07-14 07:44:17.054039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.967 [2024-07-14 07:44:17.054255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.967 [2024-07-14 07:44:17.054276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.967 [2024-07-14 07:44:17.054290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.967 [2024-07-14 07:44:17.056445] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.967 [2024-07-14 07:44:17.065548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.967 [2024-07-14 07:44:17.065893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.066072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.066098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.967 [2024-07-14 07:44:17.066114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.967 [2024-07-14 07:44:17.066310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.967 [2024-07-14 07:44:17.066482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.967 [2024-07-14 07:44:17.066504] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.967 [2024-07-14 07:44:17.066517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.967 [2024-07-14 07:44:17.068459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.967 [2024-07-14 07:44:17.077911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.967 [2024-07-14 07:44:17.078245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.078401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.078427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.967 [2024-07-14 07:44:17.078443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.967 [2024-07-14 07:44:17.078590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.967 [2024-07-14 07:44:17.078739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.967 [2024-07-14 07:44:17.078759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.967 [2024-07-14 07:44:17.078773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.967 [2024-07-14 07:44:17.080903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.967 [2024-07-14 07:44:17.090259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.967 [2024-07-14 07:44:17.090586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.090769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.090795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.967 [2024-07-14 07:44:17.090810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.967 [2024-07-14 07:44:17.090999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.967 [2024-07-14 07:44:17.091153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.967 [2024-07-14 07:44:17.091189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.967 [2024-07-14 07:44:17.091203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.967 [2024-07-14 07:44:17.093519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.967 [2024-07-14 07:44:17.102539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.967 [2024-07-14 07:44:17.102860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.103054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.103080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.967 [2024-07-14 07:44:17.103097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.967 [2024-07-14 07:44:17.103230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.967 [2024-07-14 07:44:17.103412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.967 [2024-07-14 07:44:17.103433] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.967 [2024-07-14 07:44:17.103447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.967 [2024-07-14 07:44:17.105589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.967 [2024-07-14 07:44:17.114815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.967 [2024-07-14 07:44:17.115216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.115404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.115430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.967 [2024-07-14 07:44:17.115446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.967 [2024-07-14 07:44:17.115546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.967 [2024-07-14 07:44:17.115745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.967 [2024-07-14 07:44:17.115765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.967 [2024-07-14 07:44:17.115779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.967 [2024-07-14 07:44:17.117785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.967 [2024-07-14 07:44:17.126979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.967 [2024-07-14 07:44:17.127267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.127452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.967 [2024-07-14 07:44:17.127479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:00.967 [2024-07-14 07:44:17.127494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:00.967 [2024-07-14 07:44:17.127658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:00.967 [2024-07-14 07:44:17.127806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.967 [2024-07-14 07:44:17.127827] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.967 [2024-07-14 07:44:17.127841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.967 [2024-07-14 07:44:17.129969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.227 [2024-07-14 07:44:17.139469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.227 [2024-07-14 07:44:17.139798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.227 [2024-07-14 07:44:17.140021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.227 [2024-07-14 07:44:17.140051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.227 [2024-07-14 07:44:17.140069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.227 [2024-07-14 07:44:17.140171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.227 [2024-07-14 07:44:17.140291] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.140312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.140328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.228 [2024-07-14 07:44:17.142480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.228 [2024-07-14 07:44:17.151721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.152157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.152350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.152376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.152393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.228 [2024-07-14 07:44:17.152574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.228 [2024-07-14 07:44:17.152708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.152730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.152745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.228 [2024-07-14 07:44:17.154642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.228 [2024-07-14 07:44:17.163920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.164277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.164484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.164514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.164531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.228 [2024-07-14 07:44:17.164680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.228 [2024-07-14 07:44:17.164849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.164879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.164895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.228 [2024-07-14 07:44:17.167037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.228 [2024-07-14 07:44:17.176240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.176612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.176773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.176798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.176814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.228 [2024-07-14 07:44:17.176987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.228 [2024-07-14 07:44:17.177140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.177162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.177176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.228 07:44:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:01.228 07:44:17 -- common/autotest_common.sh@852 -- # return 0 00:27:01.228 07:44:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:01.228 07:44:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:01.228 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:27:01.228 [2024-07-14 07:44:17.179211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.228 [2024-07-14 07:44:17.188608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.189003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.189179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.189214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.189229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.228 [2024-07-14 07:44:17.189392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.228 [2024-07-14 07:44:17.189525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.189545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.189559] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.228 [2024-07-14 07:44:17.191938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.228 07:44:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.228 07:44:17 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.228 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.228 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:27:01.228 [2024-07-14 07:44:17.197243] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.228 [2024-07-14 07:44:17.201023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.201381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.201551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.201575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.201591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.228 [2024-07-14 07:44:17.201739] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.228 [2024-07-14 07:44:17.201906] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.201929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.201943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.228 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.228 07:44:17 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:01.228 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.228 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:27:01.228 [2024-07-14 07:44:17.203973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.228 [2024-07-14 07:44:17.213188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.213580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.213768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.213793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.213809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.228 [2024-07-14 07:44:17.213919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.228 [2024-07-14 07:44:17.214056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.214077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.214092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:01.228 [2024-07-14 07:44:17.216335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.228 [2024-07-14 07:44:17.225275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.225577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.225788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.225813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.225829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.228 [2024-07-14 07:44:17.225987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.228 [2024-07-14 07:44:17.226124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.226151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.226185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.228 [2024-07-14 07:44:17.228239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.228 [2024-07-14 07:44:17.237544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.238071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.238289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.238316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.238336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.228 [2024-07-14 07:44:17.238478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.228 [2024-07-14 07:44:17.238640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.228 [2024-07-14 07:44:17.238662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.228 [2024-07-14 07:44:17.238679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.228 [2024-07-14 07:44:17.240773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.228 Malloc0 00:27:01.228 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.228 07:44:17 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:01.228 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.228 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:27:01.228 [2024-07-14 07:44:17.249777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.228 [2024-07-14 07:44:17.250134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.250331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.228 [2024-07-14 07:44:17.250356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.228 [2024-07-14 07:44:17.250372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.229 [2024-07-14 07:44:17.250569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.229 [2024-07-14 07:44:17.250685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.229 [2024-07-14 07:44:17.250706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.229 [2024-07-14 07:44:17.250719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.229 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.229 07:44:17 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:01.229 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.229 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:27:01.229 [2024-07-14 07:44:17.252918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.229 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.229 07:44:17 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.229 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.229 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:27:01.229 [2024-07-14 07:44:17.262243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.229 [2024-07-14 07:44:17.262611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.229 [2024-07-14 07:44:17.262772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.229 [2024-07-14 07:44:17.262797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e2400 with addr=10.0.0.2, port=4420 00:27:01.229 [2024-07-14 07:44:17.262813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e2400 is same with the state(5) to be set 00:27:01.229 [2024-07-14 07:44:17.262955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2400 (9): Bad file descriptor 00:27:01.229 [2024-07-14 07:44:17.263124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.229 [2024-07-14 07:44:17.263146] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.229 [2024-07-14 07:44:17.263160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.229 [2024-07-14 07:44:17.263791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.229 [2024-07-14 07:44:17.265365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.229 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.229 07:44:17 -- host/bdevperf.sh@38 -- # wait 14685 00:27:01.229 [2024-07-14 07:44:17.274616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.487 [2024-07-14 07:44:17.429498] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
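00:27:01.229 Threaded through the retry noise above, host/bdevperf.sh@17-21 brings the target back over JSON-RPC: create the TCP transport, back it with a Malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and re-add the 10.0.0.2:4420 listener, at which point the pending reset finally completes ("Resetting controller successful"). rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so run by hand the same bring-up would look roughly like this sketch (flags copied verbatim from the trace; the rpc.py path is assumed relative to an SPDK tree with a running target):

    # Recreate the NVMe-oF TCP target the way the traced rpc_cmd calls do.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420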
00:27:09.599
00:27:09.599                                                           Latency(us)
00:27:09.599 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:27:09.599 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:09.599     Verification LBA range: start 0x0 length 0x4000
00:27:09.599     Nvme1n1                 :      15.01    9262.18      36.18   16104.41       0.00    5031.38     885.95   21845.33
00:27:09.599 ===================================================================================================================
00:27:09.599 Total                       :                9262.18      36.18   16104.41       0.00    5031.38     885.95   21845.33
00:27:09.857 07:44:25 -- host/bdevperf.sh@39 -- # sync
00:27:09.857 07:44:25 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:09.857 07:44:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:09.857 07:44:25 -- common/autotest_common.sh@10 -- # set +x
00:27:09.857 07:44:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:09.857 07:44:25 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:09.857 07:44:25 -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:09.857 07:44:25 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:09.857 07:44:25 -- nvmf/common.sh@116 -- # sync
00:27:09.857 07:44:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:09.857 07:44:25 -- nvmf/common.sh@119 -- # set +e
00:27:09.857 07:44:25 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:09.857 07:44:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:09.857 rmmod nvme_tcp
00:27:09.857 rmmod nvme_fabrics
00:27:09.857 rmmod nvme_keyring
00:27:09.857 07:44:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:27:09.857 07:44:25 -- nvmf/common.sh@123 -- # set -e
00:27:09.857 07:44:25 -- nvmf/common.sh@124 -- # return 0
00:27:09.857 07:44:25 -- nvmf/common.sh@477 -- # '[' -n 15376 ']'
00:27:09.857 07:44:25 -- nvmf/common.sh@478 -- # killprocess 15376
00:27:09.857 07:44:25 -- common/autotest_common.sh@926 -- # '[' -z 15376 ']'
00:27:09.857 07:44:25 -- common/autotest_common.sh@930 -- # kill -0 15376
00:27:09.857 07:44:25 -- common/autotest_common.sh@931 -- # uname
00:27:09.857 07:44:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:09.857 07:44:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 15376
00:27:09.857 07:44:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:27:09.857 07:44:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:27:09.858 07:44:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 15376'
killing process with pid 15376
07:44:25 -- common/autotest_common.sh@945 -- # kill 15376
07:44:25 -- common/autotest_common.sh@950 -- # wait 15376
00:27:10.424 07:44:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:27:10.424 07:44:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:27:10.424 07:44:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:27:10.424 07:44:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:10.424 07:44:26 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:27:10.424 07:44:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:10.424 07:44:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:10.424 07:44:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:12.329 07:44:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:27:12.329
00:27:12.329 real 0m23.045s
00:27:12.329 user 1m1.450s
00:27:12.329 sys 0m4.596s
00:27:12.329 07:44:28 -- common/autotest_common.sh@1105 -- #
xtrace_disable 00:27:12.329 07:44:28 -- common/autotest_common.sh@10 -- # set +x 00:27:12.329 ************************************ 00:27:12.329 END TEST nvmf_bdevperf 00:27:12.329 ************************************ 00:27:12.329 07:44:28 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:12.329 07:44:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:12.329 07:44:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.329 07:44:28 -- common/autotest_common.sh@10 -- # set +x 00:27:12.329 ************************************ 00:27:12.329 START TEST nvmf_target_disconnect 00:27:12.329 ************************************ 00:27:12.329 07:44:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:12.329 * Looking for test storage... 00:27:12.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.330 07:44:28 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.330 07:44:28 -- nvmf/common.sh@7 -- # uname -s 00:27:12.330 07:44:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.330 07:44:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.330 07:44:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.330 07:44:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.330 07:44:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.330 07:44:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.330 07:44:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.330 07:44:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.330 07:44:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.330 07:44:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.330 07:44:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:12.330 07:44:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:12.330 07:44:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.330 07:44:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.330 07:44:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.330 07:44:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.330 07:44:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.330 07:44:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.330 07:44:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.330 07:44:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.330 07:44:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.330 07:44:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.330 07:44:28 -- paths/export.sh@5 -- # export PATH 00:27:12.330 07:44:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.330 07:44:28 -- nvmf/common.sh@46 -- # : 0 00:27:12.330 07:44:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:12.330 07:44:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:12.330 07:44:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:12.330 07:44:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.330 07:44:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.330 07:44:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:12.330 07:44:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:12.330 07:44:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:12.330 07:44:28 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:12.330 07:44:28 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:12.330 07:44:28 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:12.330 07:44:28 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:27:12.330 07:44:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:12.330 07:44:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.330 07:44:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:12.330 07:44:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:12.330 07:44:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:12.330 07:44:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.330 07:44:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.330 07:44:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.330 07:44:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:12.330 07:44:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:12.330 07:44:28 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:27:12.330 07:44:28 -- common/autotest_common.sh@10 -- # set +x 00:27:14.234 07:44:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:14.234 07:44:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:14.234 07:44:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:14.234 07:44:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:14.234 07:44:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:14.234 07:44:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:14.234 07:44:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:14.234 07:44:30 -- nvmf/common.sh@294 -- # net_devs=() 00:27:14.234 07:44:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:14.234 07:44:30 -- nvmf/common.sh@295 -- # e810=() 00:27:14.234 07:44:30 -- nvmf/common.sh@295 -- # local -ga e810 00:27:14.234 07:44:30 -- nvmf/common.sh@296 -- # x722=() 00:27:14.234 07:44:30 -- nvmf/common.sh@296 -- # local -ga x722 00:27:14.234 07:44:30 -- nvmf/common.sh@297 -- # mlx=() 00:27:14.234 07:44:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:14.234 07:44:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.234 07:44:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:14.234 07:44:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:14.234 07:44:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:14.234 07:44:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:14.234 07:44:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:14.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:14.234 07:44:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:14.234 07:44:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:14.234 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:14.234 07:44:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:14.234 07:44:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:14.234 07:44:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.234 07:44:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:14.234 07:44:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.234 07:44:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:14.234 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:14.234 07:44:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.234 07:44:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:14.234 07:44:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.234 07:44:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:14.234 07:44:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.234 07:44:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:14.234 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:14.234 07:44:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.234 07:44:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:14.234 07:44:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:14.234 07:44:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:14.234 07:44:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:14.234 07:44:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.234 07:44:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.234 07:44:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.234 07:44:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:14.234 07:44:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.234 07:44:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.234 07:44:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:14.234 07:44:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.234 07:44:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.234 07:44:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:14.234 07:44:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:14.234 07:44:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.234 07:44:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.492 07:44:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.492 07:44:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.492 07:44:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:14.492 07:44:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.492 07:44:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.492 07:44:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.492 07:44:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:14.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:14.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms
00:27:14.493
00:27:14.493 --- 10.0.0.2 ping statistics ---
00:27:14.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:14.493 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:27:14.493 07:44:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:14.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:14.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms
00:27:14.493
00:27:14.493 --- 10.0.0.1 ping statistics ---
00:27:14.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:14.493 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:27:14.493 07:44:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:14.493 07:44:30 -- nvmf/common.sh@410 -- # return 0
00:27:14.493 07:44:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:27:14.493 07:44:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:14.493 07:44:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:27:14.493 07:44:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:27:14.493 07:44:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:14.493 07:44:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:27:14.493 07:44:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:27:14.493 07:44:30 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:27:14.493 07:44:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:27:14.493 07:44:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:14.493 07:44:30 -- common/autotest_common.sh@10 -- # set +x
00:27:14.493 ************************************
00:27:14.493 START TEST nvmf_target_disconnect_tc1
00:27:14.493 ************************************
00:27:14.493 07:44:30 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1
00:27:14.493 07:44:30 -- host/target_disconnect.sh@32 -- # set +e
00:27:14.493 07:44:30 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:14.493 EAL: No free 2048 kB hugepages reported on node 1
00:27:14.493 [2024-07-14 07:44:30.587169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.493 [2024-07-14 07:44:30.587520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.493 [2024-07-14 07:44:30.587548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889920 with addr=10.0.0.2, port=4420
00:27:14.493 [2024-07-14 07:44:30.587582] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:27:14.493 [2024-07-14 07:44:30.587603] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:27:14.493 [2024-07-14 07:44:30.587616] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:27:14.493 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:27:14.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:27:14.493 Initializing NVMe Controllers
00:27:14.493 07:44:30 -- host/target_disconnect.sh@33 -- # trap - ERR
00:27:14.493 07:44:30 -- host/target_disconnect.sh@33 -- # print_backtrace
00:27:14.493 07:44:30 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]]
00:27:14.493 07:44:30 -- common/autotest_common.sh@1132 -- # return 0
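00:27:14.493 The tc1 failure above is the expected outcome: target_disconnect.sh probes 10.0.0.2:4420 before any target has been started, so spdk_nvme_probe() hits the same ECONNREFUSED as the earlier reconnect loop and the test passes by asserting on that error. The address is reachable at all because nvmf/common.sh, traced just before the pings, splits the two detected e810 ports across network namespaces; condensed from those ip(8)/iptables calls, the plumbing amounts to the sketch below (interface names from this machine's log):

    # Target port in its own netns, initiator port in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1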
00:27:14.493 07:44:30 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:27:14.493 07:44:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:27:14.493 07:44:30 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:14.493 07:44:30 -- common/autotest_common.sh@10 -- # set +x
00:27:14.493 ************************************
00:27:14.493 START TEST nvmf_target_disconnect_tc2
00:27:14.493 ************************************
00:27:14.493 07:44:30 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2
00:27:14.493 07:44:30 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2
00:27:14.493 07:44:30 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:14.493 07:44:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:14.493 07:44:30 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:14.493 07:44:30 -- common/autotest_common.sh@10 -- # set +x
00:27:14.493 07:44:30 -- nvmf/common.sh@469 -- # nvmfpid=18565
00:27:14.493 07:44:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:14.493 07:44:30 -- nvmf/common.sh@470 -- # waitforlisten 18565
00:27:14.493 07:44:30 -- common/autotest_common.sh@819 -- # '[' -z 18565 ']'
00:27:14.493 07:44:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:14.493 07:44:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:14.493 07:44:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:14.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:14.493 07:44:30 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:14.493 07:44:30 -- common/autotest_common.sh@10 -- # set +x
00:27:14.751 [2024-07-14 07:44:30.664706] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:27:14.751 [2024-07-14 07:44:30.664782] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:14.751 EAL: No free 2048 kB hugepages reported on node 1
00:27:14.751 [2024-07-14 07:44:30.728548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:14.751 [2024-07-14 07:44:30.836889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:14.751 [2024-07-14 07:44:30.837023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:14.751 [2024-07-14 07:44:30.837041] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:14.751 [2024-07-14 07:44:30.837054] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:14.751 [2024-07-14 07:44:30.837146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:27:14.751 [2024-07-14 07:44:30.837194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:27:14.751 [2024-07-14 07:44:30.837251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:27:14.751 [2024-07-14 07:44:30.837254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
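
-m 0xF0 is a CPU core mask: bits 4-7, which is why the four reactors land on cores 4-7 while the reconnect initiator runs with -c 0xF on cores 0-3, so target and initiator never share a core. A sketch of the launch the harness just performed (paths from this run; the polling loop is a simplified stand-in for waitforlisten, using rpc_get_methods only as a cheap readiness probe):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &                 # -e 0xFFFF: enable all tracepoint groups
    nvmfpid=$!
    # /var/tmp/spdk.sock is a UNIX socket on the shared filesystem, so the RPC
    # client does not need to enter the namespace:
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
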
00:27:15.681 07:44:31 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:15.681 07:44:31 -- common/autotest_common.sh@852 -- # return 0
00:27:15.681 07:44:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:27:15.681 07:44:31 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:15.681 07:44:31 -- common/autotest_common.sh@10 -- # set +x
00:27:15.681 07:44:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:15.681 07:44:31 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:15.681 07:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:15.681 07:44:31 -- common/autotest_common.sh@10 -- # set +x
00:27:15.681 Malloc0
00:27:15.681 07:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:15.681 07:44:31 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:15.681 07:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:15.681 07:44:31 -- common/autotest_common.sh@10 -- # set +x
00:27:15.681 [2024-07-14 07:44:31.725296] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:15.681 07:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:15.681 07:44:31 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:15.681 07:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:15.681 07:44:31 -- common/autotest_common.sh@10 -- # set +x
00:27:15.681 07:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:15.681 07:44:31 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:15.681 07:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:15.681 07:44:31 -- common/autotest_common.sh@10 -- # set +x
00:27:15.681 07:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:15.681 07:44:31 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:15.681 07:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:15.681 07:44:31 -- common/autotest_common.sh@10 -- # set +x
00:27:15.681 [2024-07-14 07:44:31.753530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:15.681 07:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:15.681 07:44:31 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:15.681 07:44:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:15.681 07:44:31 -- common/autotest_common.sh@10 -- # set +x
00:27:15.681 07:44:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
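
That rpc_cmd sequence is the entire target configuration. Stripped of the harness wrappers it is six rpc.py calls against the default /var/tmp/spdk.sock, with arguments exactly as issued above:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After the 'Listening on 10.0.0.2 port 4420' notice, Malloc0 is exported through nqn.2016-06.io.spdk:cnode1, which is the controller the reconnect initiator attaches to next.
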
00:27:15.681 07:44:31 -- host/target_disconnect.sh@50 -- # reconnectpid=18725
00:27:15.682 07:44:31 -- host/target_disconnect.sh@52 -- # sleep 2
00:27:15.682 07:44:31 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:15.682 EAL: No free 2048 kB hugepages reported on node 1
00:27:17.613 07:44:33 -- host/target_disconnect.sh@53 -- # kill -9 18565
00:27:17.613 07:44:33 -- host/target_disconnect.sh@55 -- # sleep 2
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Write completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Write completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Write completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 [2024-07-14 07:44:33.777362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error (sct=0, sc=8)
00:27:17.613 starting I/O failed
00:27:17.613 Read completed with error
(sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Write completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 [2024-07-14 07:44:33.777693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.613 Read completed with error (sct=0, sc=8) 00:27:17.613 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 
00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 [2024-07-14 07:44:33.778014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 
starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Read completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 Write completed with error (sct=0, sc=8) 00:27:17.614 starting I/O failed 00:27:17.614 [2024-07-14 07:44:33.778347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.614 [2024-07-14 07:44:33.778603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.778853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.778897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 00:27:17.614 [2024-07-14 07:44:33.779075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.779262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.779288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 00:27:17.614 [2024-07-14 07:44:33.779458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.779691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.779719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 00:27:17.614 [2024-07-14 07:44:33.779946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.780121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.780146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 
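
This storm is the intended shape of tc2: the target was SIGKILLed under load, so the initiator fails back every queued command and each of its qpairs reports a CQ transport error before the reconnect logic takes over. Decoding the statuses (annotation only, not harness output):

    kill -9 18565    # nvmf_tgt dies instantly; no graceful NVMe/TCP shutdown
    # sct=0, sc=8  -> Generic Command Status / 'Command Aborted due to SQ Deletion':
    #                 the host aborts the up-to-32 outstanding I/Os per queue (-q 32)
    # CQ transport error -6 (No such device or address) -> ENXIO, reported once per
    #                 qpair (ids 4, 3, 2, 1) as each dead socket is torn down
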
00:27:17.614 [2024-07-14 07:44:33.780367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.780557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.780585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 00:27:17.614 [2024-07-14 07:44:33.780806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.781004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.781034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 00:27:17.614 [2024-07-14 07:44:33.781211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.781422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.781448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 00:27:17.614 [2024-07-14 07:44:33.781693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.781920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.781947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 00:27:17.614 [2024-07-14 07:44:33.782112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.782359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.614 [2024-07-14 07:44:33.782401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.614 qpair failed and we were unable to recover it. 00:27:17.614 [2024-07-14 07:44:33.782682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.782923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.782953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.882 qpair failed and we were unable to recover it. 00:27:17.882 [2024-07-14 07:44:33.783147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.783316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.783342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.882 qpair failed and we were unable to recover it. 
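
Everything from here on is the reconnect loop spinning: each block is one attempt in which two connect() calls fail with errno 111 (ECONNREFUSED), the admin qpair cannot be built, and the attempt is written off with 'qpair failed and we were unable to recover it'; the varying tqpair pointers are just per-attempt allocations. Since the target was killed rather than restarted, nothing will ever accept on 10.0.0.2:4420 again. A quick manual probe to the same effect (hypothetical check, not part of the harness):

    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo 'no listener on 10.0.0.2:4420 -- every reconnect attempt will be refused'
    fi
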
00:27:17.882 [2024-07-14 07:44:33.783587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.783786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.783817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.882 qpair failed and we were unable to recover it. 00:27:17.882 [2024-07-14 07:44:33.784009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.784218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.784245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.882 qpair failed and we were unable to recover it. 00:27:17.882 [2024-07-14 07:44:33.784474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.882 [2024-07-14 07:44:33.784713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.784742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.784960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.785124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.785150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.785397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.785701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.785731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.785925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.786097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.786124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.786315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.786537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.786580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.786840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.787050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.787077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.787272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.787500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.787528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.787727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.787949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.787976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.788135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.788322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.788348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.788538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.788718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.788747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.788929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.789098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.789125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.789301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.789661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.789718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.789895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.790105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.790130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.790322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.790515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.790542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.790756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.790951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.790979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.791143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.791324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.791350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.791562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.791739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.791765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.791965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.792146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.792174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.792371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.792583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.792609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.792769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.792954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.792980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.793140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.793353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.793378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.793593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.793814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.793844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c28000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.794057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.794259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.794287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.794488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.794699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.794729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.794963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.795126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.795152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.795314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.795500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.795526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.795718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.795878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.795905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.796105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.796343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.796372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.796720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.796961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.796987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.797163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.797351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.797378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.797601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.797792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.797818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.798023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.798216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.798242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.798408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.798593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.798618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.798834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.799023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.799050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.799270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.799489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.799515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.800119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.800370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.800395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.800602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.800800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.800825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.801046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.801220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.801246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.801462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.801708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.801733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.801970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.802188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.802229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.802422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.802610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.802649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.802840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.803070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.803114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.803355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.803616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.803657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.803875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.804103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.804129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.804416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.804587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.804675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.804949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.805109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.805136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.805386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.805597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.805641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.805860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.806025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.806050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.806223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.806416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.806458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.806669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.806894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.806921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.807112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.807308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.807333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.807570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.807810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.807851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.808053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.808258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.808287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.808503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.808738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.808779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.808980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.809257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.809283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.809479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.809631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.809657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.809873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.810069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.810095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.810410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.810633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.810658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.810850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.811034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.811060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.811281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.811486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.811529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 00:27:17.883 [2024-07-14 07:44:33.811743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.811945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.811988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.883 qpair failed and we were unable to recover it. 
00:27:17.883 [2024-07-14 07:44:33.812242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.883 [2024-07-14 07:44:33.812459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.812484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.812715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.813026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.813052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.813372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.813623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.813666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.813860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.814074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.814100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.814303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.814525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.814550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.814802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.815040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.815067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.815277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.815503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.815531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 
00:27:17.884 [2024-07-14 07:44:33.815776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.815970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.815996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.816181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.816393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.816420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.816649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.816811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.816836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.817030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.817226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.817250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.817433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.817638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.817663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.817844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.818018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.818044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.818207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.818384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.818409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 
00:27:17.884 [2024-07-14 07:44:33.818606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.818792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.818817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.819033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.819214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.819242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.819445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.819633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.819658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.819871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.820060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.820086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.820272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.820457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.820482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.820692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.820920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.820946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.821135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.821369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.821397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 
00:27:17.884 [2024-07-14 07:44:33.821633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.821816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.821844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.822084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.822344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.822374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.822533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.822711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.822737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.823001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.823169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.823195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.823377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.823580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.823605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.823797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.823982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.824008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.824188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.824376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.824401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 
00:27:17.884 [2024-07-14 07:44:33.824611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.824816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.824841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.825028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.825184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.825209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.825399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.825628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.825656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.825875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.826069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.826095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.826273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.826462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.826487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.826655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.826836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.826889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.827100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.827335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.827360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 
00:27:17.884 [2024-07-14 07:44:33.827552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.827708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.827733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.827890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.828119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.828146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.828357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.828560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.828585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.828764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.828944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.828969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.829163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.829425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.829480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.829691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.829876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.829902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.830087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.830326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.830353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 
00:27:17.884 [2024-07-14 07:44:33.830591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.830775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.830800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.830989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.831174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.831199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.831408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.831618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.831643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.831803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.832035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.832064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.832255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.832450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.832478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.832667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.832889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.832914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.833173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.833354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.833379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 
00:27:17.884 [2024-07-14 07:44:33.833549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.833762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.833788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.834002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.834193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.834221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.834423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.834626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.834652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.834861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.835076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.835104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.835314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.835473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.835499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.835694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.835855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.835889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.836101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.836279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.836305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 
00:27:17.884 [2024-07-14 07:44:33.836496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.836677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.836702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.836888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.837109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.837134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.837369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.837606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.837653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.837863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.838048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.838074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.884 qpair failed and we were unable to recover it. 00:27:17.884 [2024-07-14 07:44:33.838259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.884 [2024-07-14 07:44:33.838467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.838493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.838670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.838875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.838904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.839113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.839300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.839325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 
00:27:17.885 [2024-07-14 07:44:33.839510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.839758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.839799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.840021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.840366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.840424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.840633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.840850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.840884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.841102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.841273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.841298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.841493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.841774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.841798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.842020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.842242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.842267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.842584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.842819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.842844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 
00:27:17.885 [2024-07-14 07:44:33.843121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.843420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.843470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.843643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.843874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.843900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.844078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.844252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.844278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.844500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.844712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.844742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.844952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.845117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.845144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.845387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.845545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.845570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.845776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.845996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.846021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 
00:27:17.885 [2024-07-14 07:44:33.846226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.846384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.846424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.846607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.846891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.846916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.847113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.847297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.847357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.847559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.847749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.847774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.848056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.848305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.848330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.848516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.848797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.848821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.849042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.849254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.849284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 
00:27:17.885 [2024-07-14 07:44:33.849492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.849680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.849705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.849936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.850171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.850199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.850376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.850590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.850616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.850830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.851040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.851066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.851217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.851497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.851543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.851762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.851967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.851994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.852183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.852360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.852384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 
00:27:17.885 [2024-07-14 07:44:33.852648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.852799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.852824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.853060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.853260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.853288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.853501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.853858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.853891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.854137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.854355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.854380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.854630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.854831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.854856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.855068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.855229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.855271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.855498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.855717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.855750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 
00:27:17.885 [2024-07-14 07:44:33.856058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.856398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.856456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.856686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.856862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.856898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.857114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.857467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.857517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.857828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.858090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.858116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.858314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.858533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.858567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.858817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.859043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.859070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.859295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.859654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.859709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 
00:27:17.885 [2024-07-14 07:44:33.859944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.860151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.860179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.860417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.860646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.860671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.860874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.861108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.861136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.861382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.861631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.861655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.861892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.862116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.862141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.862372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.862540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.862564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.862718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.863028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.863069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 
00:27:17.885 [2024-07-14 07:44:33.863272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.863558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.863582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.885 qpair failed and we were unable to recover it. 00:27:17.885 [2024-07-14 07:44:33.863760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.885 [2024-07-14 07:44:33.863953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.863980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.864170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.864418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.864459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.864708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.864883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.864911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.865126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.865367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.865407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.865628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.865847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.865879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.866104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.866372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.866400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 
00:27:17.886 [2024-07-14 07:44:33.866588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.866805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.866830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.867050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.867213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.867241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.867452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.867657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.867685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.867923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.868102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.868127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.868303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.868533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.868581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.868813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.869033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.869062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 00:27:17.886 [2024-07-14 07:44:33.869300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.869554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.886 [2024-07-14 07:44:33.869600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.886 qpair failed and we were unable to recover it. 
00:27:17.886 [2024-07-14 07:44:33.869834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.886 [2024-07-14 07:44:33.870072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.886 [2024-07-14 07:44:33.870098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:17.886 qpair failed and we were unable to recover it.
00:27:17.888 [repeated: the same sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1bc69f0 at 10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") recurs continuously from 07:44:33.870 through 07:44:33.946]
00:27:17.888 [2024-07-14 07:44:33.946618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.946856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.946893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.888 qpair failed and we were unable to recover it. 00:27:17.888 [2024-07-14 07:44:33.947121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.947545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.947594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.888 qpair failed and we were unable to recover it. 00:27:17.888 [2024-07-14 07:44:33.947817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.947999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.948025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.888 qpair failed and we were unable to recover it. 00:27:17.888 [2024-07-14 07:44:33.948242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.948423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.948448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.888 qpair failed and we were unable to recover it. 00:27:17.888 [2024-07-14 07:44:33.948649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.948885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.888 [2024-07-14 07:44:33.948910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.888 qpair failed and we were unable to recover it. 00:27:17.888 [2024-07-14 07:44:33.949168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.949536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.949591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.949824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.950025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.950056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 
00:27:17.889 [2024-07-14 07:44:33.950242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.950489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.950557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.950825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.951032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.951060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.951288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.951562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.951613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.951904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.952194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.952219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.952439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.952686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.952710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.952902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.953117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.953142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.953360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.953571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.953599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 
00:27:17.889 [2024-07-14 07:44:33.953772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.954051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.954077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.954335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.954539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.954564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.954776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.954983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.955008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.955205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.955414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.955447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.955646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.955823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.955853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.956049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.956259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.956307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.956497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.956690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.956736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 
00:27:17.889 [2024-07-14 07:44:33.956977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.957211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.957251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.957448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.957643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.957668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.957891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.958148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.958176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.958383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.958623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.958648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.958825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.959007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.959033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.959222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.959506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.959533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.959755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.959942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.959967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 
00:27:17.889 [2024-07-14 07:44:33.960214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.960588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.960639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.960876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.961047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.961078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.961268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.961430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.961455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.961659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.961921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.961948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.962177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.962524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.962573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.962781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.963013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.963043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.963262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.963653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.963703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 
00:27:17.889 [2024-07-14 07:44:33.963933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.964142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.964170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.964400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.964691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.964715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.964929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.965124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.965152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.965385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.965576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.965621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.965850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.966088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.966117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.966298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.966520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.966566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.966852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.967062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.967090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 
00:27:17.889 [2024-07-14 07:44:33.967296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.967504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.967532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.967760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.967967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.967998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.968192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.968394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.968422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.968651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.968856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.968893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.969105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.969323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.969351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.969591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.969792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.969820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.970028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.970231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.970256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 
00:27:17.889 [2024-07-14 07:44:33.970638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.970889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.970917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.971127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.971354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.971401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.971606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.971832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.971857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.972072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.972287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.972315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.972546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.972926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.972957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.973176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.973404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.973461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.973668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.973840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.973881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 
00:27:17.889 [2024-07-14 07:44:33.974093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.974271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.974298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.974672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.974927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.974957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.975143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.975514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.975574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.889 qpair failed and we were unable to recover it. 00:27:17.889 [2024-07-14 07:44:33.975957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.976193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.889 [2024-07-14 07:44:33.976221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.976454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.976800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.976853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.977076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.977360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.977384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.977565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.977895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.977950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 
00:27:17.890 [2024-07-14 07:44:33.978196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.978406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.978431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.978644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.978845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.978882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.979111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.979338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.979366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.979605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.979838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.979874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.980116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.980646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.980697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.980930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.981108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.981141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.981485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.981856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.981935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 
00:27:17.890 [2024-07-14 07:44:33.982161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.982356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.982381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.982598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.982936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.982965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.983192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.983396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.983426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.983664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.983913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.983942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.984142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.984339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.984364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.984540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.984797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.984822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.985035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.985206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.985234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 
00:27:17.890 [2024-07-14 07:44:33.985412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.985628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.985675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.985894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.986129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.986157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.986339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.986580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.986620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.986837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.987043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.987071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.987437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.987682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.987729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.987960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.988177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.988205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.988437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.988634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.988680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 
00:27:17.890 [2024-07-14 07:44:33.988909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.989113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.989141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.989353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.989604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.989650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.989906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.990137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.990165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.990397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.990669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.990693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.990946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.991124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.991152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.991366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.991572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.991600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.991819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.992071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.992100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 
00:27:17.890 [2024-07-14 07:44:33.992305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.992577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.992605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.992811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.993015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.993044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.993221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.993418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.993446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.993653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.993950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.993979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.994209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.994570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.994621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.994852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.995100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.995125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 00:27:17.890 [2024-07-14 07:44:33.995413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.995799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.890 [2024-07-14 07:44:33.995855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:17.890 qpair failed and we were unable to recover it. 
00:27:17.890 [2024-07-14 07:44:33.996099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.890 [2024-07-14 07:44:33.996435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.890 [2024-07-14 07:44:33.996496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:17.890 qpair failed and we were unable to recover it.
00:27:17.890 [the four-line group above repeats, with only the microsecond timestamps changing, roughly 150 more times between 07:44:33.996 and 07:44:34.068 (elapsed 00:27:17.890-00:27:18.162); every attempt to connect tqpair=0x1bc69f0 to 10.0.0.2:4420 fails with errno = 111 and is not recovered]
00:27:18.162 [2024-07-14 07:44:34.067955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.162 [2024-07-14 07:44:34.068139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.162 [2024-07-14 07:44:34.068186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.162 qpair failed and we were unable to recover it.
00:27:18.162 [2024-07-14 07:44:34.068371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.068556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.068581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.068791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.068990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.069020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.069221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.069416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.069444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.069646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.069853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.069896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.070060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.070249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.070290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.070499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.070774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.070823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.071032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.071323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.071374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 
00:27:18.162 [2024-07-14 07:44:34.071579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.071786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.071814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.072031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.072212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.072241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.072443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.072635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.072659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.072886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.073099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.073124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.073333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.073666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.073713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.073928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.074121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.074161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.074553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.074806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.074835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 
00:27:18.162 [2024-07-14 07:44:34.075076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.075246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.075286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.075477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.075665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.075690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.075887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.076040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.076066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.076253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.076447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.076471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.076671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.076950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.076976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.077167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.077323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.077364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.077583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.077845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.077902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 
00:27:18.162 [2024-07-14 07:44:34.078093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.078449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.078502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.078744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.078932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.078961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.079140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.079346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.079371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.079591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.079817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.079842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.080089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.080310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.080335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.080545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.080824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.080852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.081066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.081267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.081319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 
00:27:18.162 [2024-07-14 07:44:34.081522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.081707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.081747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.081951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.082104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.082145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.082351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.082735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.082795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.083002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.083212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.083241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.083425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.083701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.083752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.083983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.084224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.084252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.084468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.084634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.084674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 
00:27:18.162 [2024-07-14 07:44:34.084863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.085090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.085115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.085278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.085435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.085461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.085624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.085803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.085828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.086044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.086223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.086248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.086454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.086608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.086648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.086852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.087008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.087034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.087226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.087422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.087447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 
00:27:18.162 [2024-07-14 07:44:34.087634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.087893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.087919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.088104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.088282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.088307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.088500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.088692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.088732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.088943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.089131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.089155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.089317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.089533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.089558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.089743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.089900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.089927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.090113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.090264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.090290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 
00:27:18.162 [2024-07-14 07:44:34.090442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.090634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.090659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.090839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.091030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.091057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.091240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.091391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.091416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.091600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.091804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.091829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.092043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.092313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.092365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.092547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.092754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.092779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.092966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.093172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.093201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 
00:27:18.162 [2024-07-14 07:44:34.093432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.093812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.093894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.162 qpair failed and we were unable to recover it. 00:27:18.162 [2024-07-14 07:44:34.094083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.162 [2024-07-14 07:44:34.094370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.094394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.094608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.094818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.094843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.095039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.095193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.095218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.095459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.095688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.095740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.095963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.096181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.096207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.096388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.096546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.096571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 
00:27:18.163 [2024-07-14 07:44:34.096764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.096946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.096972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.097132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.097314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.097339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.097522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.097706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.097731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.097944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.098098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.098124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.098334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.098552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.098577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.098759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.098940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.098966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.099225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.099640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.099692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 
00:27:18.163 [2024-07-14 07:44:34.099916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.100107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.100135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.100335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.100517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.100545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.100769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.101000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.101029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.101314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.101582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.101610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.101824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.102052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.102081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.102314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.102660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.102730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.102932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.103131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.103159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 
00:27:18.163 [2024-07-14 07:44:34.103527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.103786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.103814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.104025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.104337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.104389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.104618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.104856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.104893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.105107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.105450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.105499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.105802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.106075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.106106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.106317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.106526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.106553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.106782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.107014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.107043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 
00:27:18.163 [2024-07-14 07:44:34.107261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.107502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.107560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.107761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.107960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.107990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.108198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.108425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.108475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.108684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.108861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.108896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.109126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.109353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.109382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.109728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.110007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.110036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.110237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.110473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.110501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 
00:27:18.163 [2024-07-14 07:44:34.110730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.110956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.110986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.111190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.111447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.111475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.111656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.111878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.111906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.112112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.112318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.112346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.112551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.112836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.112864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.113112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.113328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.113356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 00:27:18.163 [2024-07-14 07:44:34.113653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.113818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.163 [2024-07-14 07:44:34.113842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.163 qpair failed and we were unable to recover it. 
00:27:18.163 [2024-07-14 07:44:34.114049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.163 [2024-07-14 07:44:34.114367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.163 [2024-07-14 07:44:34.114419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.163 qpair failed and we were unable to recover it.
00:27:18.165 [... the same failure pattern (two connect() failures with errno = 111, one sock connection error for tqpair=0x1bc69f0 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 07:44:34.114 through 07:44:34.193 ...]
00:27:18.165 [2024-07-14 07:44:34.193336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.193541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.193569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 00:27:18.165 [2024-07-14 07:44:34.193807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.193993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.194018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 00:27:18.165 [2024-07-14 07:44:34.194207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.194410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.194438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 00:27:18.165 [2024-07-14 07:44:34.194670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.194909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.194935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 00:27:18.165 [2024-07-14 07:44:34.195165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.195572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.195625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 00:27:18.165 [2024-07-14 07:44:34.195855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.196095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.196124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 00:27:18.165 [2024-07-14 07:44:34.196353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.196539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.196579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 
00:27:18.165 [2024-07-14 07:44:34.196828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.197012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.197041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 00:27:18.165 [2024-07-14 07:44:34.197239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.197437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.165 [2024-07-14 07:44:34.197465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.165 qpair failed and we were unable to recover it. 00:27:18.165 [2024-07-14 07:44:34.197718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.197936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.197962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.198191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.198424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.198452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.198671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.198863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.198902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.199064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.199301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.199329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.199681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.199934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.199963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.200175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.200438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.200481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.200687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.200895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.200924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.201154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.201528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.201589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.201788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.202001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.202030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.202264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.202463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.202492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.202694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.202923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.202953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.203142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.203320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.203363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.203640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.203911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.203940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.204160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.204395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.204422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.204630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.204820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.204848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.205084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.205296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.205322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.205621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.205853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.205890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.206122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.206557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.206607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.206812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.207031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.207057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.207267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.207671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.207723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.208030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.208285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.208339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.208619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.208874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.208904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.209083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.209292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.209320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.209492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.209727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.209752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.209970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.210159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.210199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.210408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.210655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.210709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.210944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.211180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.211209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.211447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.211835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.211904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.212142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.212361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.212386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.212602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.212757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.212782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.213010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.213300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.213362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.213590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.213798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.213825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.214037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.214393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.214446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.214680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.214897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.214926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.215159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.215341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.215373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.215601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.215802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.215830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.216041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.216373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.216436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.216624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.216857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.216892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.217077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.217433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.217473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.217707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.217883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.217912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.218117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.218374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.218402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.218598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.218809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.218837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.219060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.219424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.219479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.219677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.219908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.219947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.220179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.220531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.220599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.220894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.221130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.221158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.221367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.221602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.221627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.221899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.222080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.222108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.222520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.222944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.222973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.223227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.223616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.223676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.223913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.224103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.224129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.224351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.224551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.224576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.224892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.225125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.225153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.225405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.225636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.225664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.225904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.226120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.226145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.226357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.226577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.226602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.226840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.227068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.227097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.227384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.227709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.227734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.227947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.228132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.228160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.228365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.228537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.228562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.228783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.228981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.229010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.229213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.229578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.229627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.229855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.230076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.230102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.230336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.230684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.230742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.230986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.231144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.231184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.231454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.231666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.231694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.231904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.232107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.232134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 00:27:18.166 [2024-07-14 07:44:34.232336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.232685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.166 [2024-07-14 07:44:34.232733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.166 qpair failed and we were unable to recover it. 
00:27:18.166 [2024-07-14 07:44:34.232961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.233185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.233250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.233443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.233656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.233685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.233886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.234113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.234142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.234369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.234653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.234707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.234935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.235137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.235165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.235403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.235736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.235785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.236025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.236209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.236237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 
00:27:18.167 [2024-07-14 07:44:34.236441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.236671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.236695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.236925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.237124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.237152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.237401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.237733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.237793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.238029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.238228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.238253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.238582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.238819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.238847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.239066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.239254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.239280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.239530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.239938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.239967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 
00:27:18.167 [2024-07-14 07:44:34.240216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.240454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.240482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.240674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.240882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.240908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.241123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.241305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.241333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.241585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.241801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.241828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.242002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.242272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.242324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.242562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.242793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.242818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 00:27:18.167 [2024-07-14 07:44:34.243017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.243203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.167 [2024-07-14 07:44:34.243231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.167 qpair failed and we were unable to recover it. 
00:27:18.167 [2024-07-14 07:44:34.243436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.167 [2024-07-14 07:44:34.243899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.167 [2024-07-14 07:44:34.243928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.167 qpair failed and we were unable to recover it.
[... the same four-line error group repeats for 153 further qpair connection attempts, application timestamps 2024-07-14 07:44:34.244129 through 07:44:34.315734 (console elapsed 00:27:18.167-00:27:18.169): two posix_sock_create connect() failures with errno = 111, then the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x1bc69f0, addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:27:18.169 [2024-07-14 07:44:34.316010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.316229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.316258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.169 qpair failed and we were unable to recover it. 00:27:18.169 [2024-07-14 07:44:34.316470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.316901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.316958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.169 qpair failed and we were unable to recover it. 00:27:18.169 [2024-07-14 07:44:34.317181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.317439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.317477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.169 qpair failed and we were unable to recover it. 00:27:18.169 [2024-07-14 07:44:34.317661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.317878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.317906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.169 qpair failed and we were unable to recover it. 00:27:18.169 [2024-07-14 07:44:34.318085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.318317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.318346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.169 qpair failed and we were unable to recover it. 00:27:18.169 [2024-07-14 07:44:34.318680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.318956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.169 [2024-07-14 07:44:34.318993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.169 qpair failed and we were unable to recover it. 00:27:18.169 [2024-07-14 07:44:34.319203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.319389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.319417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 
00:27:18.438 [2024-07-14 07:44:34.319605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.319837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.319897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.320160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.320535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.320573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.320828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.321072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.321110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.321284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.321478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.321506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.321714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.321962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.321992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.322225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.322486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.322528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.322763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.322978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.323006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 
00:27:18.438 [2024-07-14 07:44:34.323173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.323357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.323382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.323596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.323825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.323858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.324106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.324307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.324335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.324551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.324716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.324744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.324949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.325139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.325166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.325333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.325522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.325547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.325729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.325960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.325989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 
00:27:18.438 [2024-07-14 07:44:34.326200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.326424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.326452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.326666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.326895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.326924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.327134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.327320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.327348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.327558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.327712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.327737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.327944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.328172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.328243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.328435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.328623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.328648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.438 qpair failed and we were unable to recover it. 00:27:18.438 [2024-07-14 07:44:34.328860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.329060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.438 [2024-07-14 07:44:34.329089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 
00:27:18.439 [2024-07-14 07:44:34.329290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.329654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.329707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.329998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.330297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.330370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.330584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.330819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.330847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.331099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.331388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.331416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.331622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.331795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.331823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.332062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.332300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.332325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.332504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.332690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.332715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 
00:27:18.439 [2024-07-14 07:44:34.332923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.333114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.333139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.333384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.333676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.333735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.333943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.334177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.334203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.334415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.334693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.334748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.334960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.335170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.335199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.335401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.335725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.335783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.336011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.336202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.336227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 
00:27:18.439 [2024-07-14 07:44:34.336436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.336698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.336723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.336932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.337171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.337197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.337380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.337630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.337679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.337926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.338134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.338186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.338425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.338665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.338693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.338920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.339125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.339165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.339361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.339605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.339631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 
00:27:18.439 [2024-07-14 07:44:34.339818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.340050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.340079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.340296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.340682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.340749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.340995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.341185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.341213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.341448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.341665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.341717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.342031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.342346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.342404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.342648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.342838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.342895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.343115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.343418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.343467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 
00:27:18.439 [2024-07-14 07:44:34.343663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.343875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.343904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.344112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.344334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.439 [2024-07-14 07:44:34.344360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.439 qpair failed and we were unable to recover it. 00:27:18.439 [2024-07-14 07:44:34.344596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.344807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.344835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.345061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.345261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.345299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.345503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.345815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.345878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.346065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.346397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.346454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.346658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.346877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.346904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 
00:27:18.440 [2024-07-14 07:44:34.347138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.347375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.347415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.347666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.347879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.347909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.348092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.348418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.348469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.348737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.348976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.349010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.349215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.349517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.349569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.349773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.349944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.349973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.350231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.350557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.350618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 
00:27:18.440 [2024-07-14 07:44:34.350849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.351079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.351109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.351357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.351568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.351596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.351826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.352033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.352062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.352338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.352610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.352663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.352850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.353070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.353096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.353350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.353714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.353765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.354004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.354221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.354250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 
00:27:18.440 [2024-07-14 07:44:34.354455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.354749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.354807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.355042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.355336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.355403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.355612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.355840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.355886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.356069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.356281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.356339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.356736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.357005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.357033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.357268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.357641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.357698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.357928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.358132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.358167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 
00:27:18.440 [2024-07-14 07:44:34.358413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.358583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.358608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.358923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.359138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.359177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.359452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.359732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.359759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.360003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.360204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.440 [2024-07-14 07:44:34.360229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.440 qpair failed and we were unable to recover it. 00:27:18.440 [2024-07-14 07:44:34.360446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.360622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.360661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 00:27:18.441 [2024-07-14 07:44:34.360882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.361115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.361143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 00:27:18.441 [2024-07-14 07:44:34.361371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.361622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.361650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 
00:27:18.441 [2024-07-14 07:44:34.361881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.362054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.362079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 00:27:18.441 [2024-07-14 07:44:34.362431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.362925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.362955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 00:27:18.441 [2024-07-14 07:44:34.363183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.363410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.363434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 00:27:18.441 [2024-07-14 07:44:34.363655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.363864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.363899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 00:27:18.441 [2024-07-14 07:44:34.364077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.364339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.364394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 00:27:18.441 [2024-07-14 07:44:34.364622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.364809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.364849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 00:27:18.441 [2024-07-14 07:44:34.365091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.365288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.441 [2024-07-14 07:44:34.365316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.441 qpair failed and we were unable to recover it. 
00:27:18.441 [2024-07-14 07:44:34.365534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.441 [2024-07-14 07:44:34.365934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.441 [2024-07-14 07:44:34.365964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.441 qpair failed and we were unable to recover it.
00:27:18.441 [... the same four-line failure group repeats for every connect attempt from 07:44:34.366142 through 07:44:34.439548: two posix.c:1032:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420" entry, then "qpair failed and we were unable to recover it." ...]
00:27:18.446 [2024-07-14 07:44:34.439760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.446 [2024-07-14 07:44:34.439963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.446 [2024-07-14 07:44:34.439990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.446 qpair failed and we were unable to recover it.
00:27:18.446 [2024-07-14 07:44:34.440191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.440429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.440457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.440665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.440928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.440955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.441112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.441414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.441460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.441678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.441878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.441905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.442068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.442300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.442346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.442589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.442824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.442850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.443038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.443204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.443230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 
00:27:18.446 [2024-07-14 07:44:34.443413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.443619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.443663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.443916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.444074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.444100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.444318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.444571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.444617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.446 qpair failed and we were unable to recover it. 00:27:18.446 [2024-07-14 07:44:34.444815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.446 [2024-07-14 07:44:34.445008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.445034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.445194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.445365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.445413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.445666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.445889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.445932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.446098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.446349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.446396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 
00:27:18.447 [2024-07-14 07:44:34.446662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.446896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.446923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.447078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.447253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.447281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.447485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.447768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.447796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.448006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.448189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.448235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.448440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.448670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.448715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.448976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.449138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.449179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.449358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.449547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.449592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 
00:27:18.447 [2024-07-14 07:44:34.449822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.450011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.450037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.450273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.450570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.450618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.450938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.451095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.451120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.451339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.451541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.451586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.451811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.452007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.452034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.452214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.452431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.452482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.452733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.452973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.453000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 
00:27:18.447 [2024-07-14 07:44:34.453191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.453434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.453485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.453744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.453971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.453997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.454205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.454415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.454461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.454729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.454952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.454978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.455174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.455360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.455385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.455590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.455793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.455821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.456053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.456264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.456309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 
00:27:18.447 [2024-07-14 07:44:34.456536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.456741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.456770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.447 qpair failed and we were unable to recover it. 00:27:18.447 [2024-07-14 07:44:34.456994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.447 [2024-07-14 07:44:34.457175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.457206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.457434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.457722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.457781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.458009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.458194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.458249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.458432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.458653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.458700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.458952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.459115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.459141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.459347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.459546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.459591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 
00:27:18.448 [2024-07-14 07:44:34.459775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.459999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.460025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.460193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.460397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.460425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.460631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.460836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.460879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.461095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.461325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.461353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.461556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.461802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.461830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.462038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.462201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.462244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.462423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.462654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.462681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 
00:27:18.448 [2024-07-14 07:44:34.462860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.463048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.463073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.463319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.463576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.463608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.463851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.464049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.464075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.464249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.464483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.464510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.464723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.464952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.464979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.465167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.465431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.465484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.465755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.465973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.465999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 
00:27:18.448 [2024-07-14 07:44:34.466191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.466387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.466415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.466587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.466820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.466848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.467043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.467218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.467246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.467455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.467689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.467717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.467931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.468095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.468121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.468278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.468451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.468478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.468685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.468884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.468929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 
00:27:18.448 [2024-07-14 07:44:34.469085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.469302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.469330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.469533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.469700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.469727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.448 [2024-07-14 07:44:34.469943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.470109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.448 [2024-07-14 07:44:34.470149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.448 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.470358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.470552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.470580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.470877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.471058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.471083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.471260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.471491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.471537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.471722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.471948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.471974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 
00:27:18.449 [2024-07-14 07:44:34.472137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.472353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.472381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.472584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.472790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.472820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.473009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.473181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.473209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.473419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.473597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.473627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.473799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.474018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.474044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.474206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.474394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.474423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.474624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.474827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.474883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 
00:27:18.449 [2024-07-14 07:44:34.475096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.475286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.475314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.475519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.475701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.475727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.475908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.476068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.476099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.476292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.476500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.476526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.476716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.476882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.476909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.477064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.477235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.477263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.477451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.477645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.477674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 
00:27:18.449 [2024-07-14 07:44:34.477894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.478071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.478096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.478260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.478411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.478452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.478660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.478924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.478951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.479141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.479399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.479427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.479649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.479886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.479912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.480083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.480276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.480304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.480489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.480741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.480786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 
00:27:18.449 [2024-07-14 07:44:34.480985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.481143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.481187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.481428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.481609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.481634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.481822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.482002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.482028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.482188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.482423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.482451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.482678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.482856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.449 [2024-07-14 07:44:34.482892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.449 qpair failed and we were unable to recover it. 00:27:18.449 [2024-07-14 07:44:34.483070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.450 [2024-07-14 07:44:34.483220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.450 [2024-07-14 07:44:34.483246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.450 qpair failed and we were unable to recover it. 00:27:18.450 [2024-07-14 07:44:34.483467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.450 [2024-07-14 07:44:34.483673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.450 [2024-07-14 07:44:34.483720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.450 qpair failed and we were unable to recover it. 
00:27:18.450 [2024-07-14 07:44:34.483993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.450 [2024-07-14 07:44:34.484170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.450 [2024-07-14 07:44:34.484198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.450 qpair failed and we were unable to recover it.
[The same four-line record repeats continuously from 07:44:34.484 through 07:44:34.553 (log timestamps 00:27:18.450-00:27:18.455): every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x1bc69f0, and each qpair fails without recovery.]
00:27:18.455 [2024-07-14 07:44:34.553938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.554161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.554186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.554393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.554691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.554757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.555003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.555200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.555226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.555411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.555596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.555622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.555843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.556024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.556050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.556234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.556448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.556476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.556688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.556892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.556920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 
00:27:18.455 [2024-07-14 07:44:34.557146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.557346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.557391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.557589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.557790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.557818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.558031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.558239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.558285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.558479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.558701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.558726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.558919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.559122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.559150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.559384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.559586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.559632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.559824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.560031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.560059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 
00:27:18.455 [2024-07-14 07:44:34.560297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.560554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.560598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.560806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.561048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.455 [2024-07-14 07:44:34.561077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.455 qpair failed and we were unable to recover it. 00:27:18.455 [2024-07-14 07:44:34.561270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.561473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.561528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.561739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.561969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.561998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.562198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.562538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.562591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.562829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.563040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.563068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.563276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.563432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.563457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 
00:27:18.456 [2024-07-14 07:44:34.563650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.563833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.563859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.564101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.564395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.564457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.564724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.564933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.564959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.565166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.565375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.565403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.565596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.565775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.565803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.566017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.566276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.566321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.566556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.566800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.566828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 
00:27:18.456 [2024-07-14 07:44:34.567064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.567235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.567261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.567471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.567652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.567677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.567862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.568032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.568082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.568307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.568571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.568616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.568822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.569037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.569066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.569294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.569461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.569489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.569661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.569861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.569908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 
00:27:18.456 [2024-07-14 07:44:34.570096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.570286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.570311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.570497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.570772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.570825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.571092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.571395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.571453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.571683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.571859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.571897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.572104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.572280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.572306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.572498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.572676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.572702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.456 qpair failed and we were unable to recover it. 00:27:18.456 [2024-07-14 07:44:34.572885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.456 [2024-07-14 07:44:34.573078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.573108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 
00:27:18.457 [2024-07-14 07:44:34.573318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.573531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.573556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.573707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.573894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.573920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.574132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.574317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.574342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.574519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.574699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.574724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.574938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.575117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.575164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.575472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.575689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.575719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.575927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.576114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.576140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 
00:27:18.457 [2024-07-14 07:44:34.576330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.576567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.576595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.576812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.577020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.577049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.577222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.577395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.577427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.577611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.577820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.577845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.578053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.578262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.578300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.578525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.578788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.578832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.579033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.579230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.579259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 
00:27:18.457 [2024-07-14 07:44:34.579435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.579632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.579660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.579888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.580063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.580093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.580292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.580630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.580691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.580897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.581121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.581150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.581331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.581676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.581741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.581982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.582162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.582190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.582406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.582634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.582661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 
00:27:18.457 [2024-07-14 07:44:34.582897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.583110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.583136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.583301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.583569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.583620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.583844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.584070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.584098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.584273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.584502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.584550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.584765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.584956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.584983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.585173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.585359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.585404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.585636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.585838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.585875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 
00:27:18.457 [2024-07-14 07:44:34.586035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.586258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.586308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.457 [2024-07-14 07:44:34.586524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.586726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.457 [2024-07-14 07:44:34.586754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.457 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.586955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.587171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.587196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.587440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.587678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.587704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.587913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.588150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.588179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.588466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.588715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.588743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.588968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.589180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.589208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 
00:27:18.458 [2024-07-14 07:44:34.589409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.589580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.589609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.589788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.589964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.589995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.590195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.590394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.590420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.590605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.590786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.590811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.591029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.591401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.591451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.591653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.591846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.591893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.592140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.592352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.592380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 
00:27:18.458 [2024-07-14 07:44:34.592631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.592859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.592912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.593138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.593388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.593430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.593655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.593874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.593905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.594114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.594360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.594385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.594608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.594889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.594926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.595170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.595381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.595412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.595633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.595807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.595835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 
00:27:18.458 [2024-07-14 07:44:34.596080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.596353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.596408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.458 [2024-07-14 07:44:34.596650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.596821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.458 [2024-07-14 07:44:34.596847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.458 qpair failed and we were unable to recover it. 00:27:18.729 [2024-07-14 07:44:34.597039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.597473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.597527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.729 qpair failed and we were unable to recover it. 00:27:18.729 [2024-07-14 07:44:34.597777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.598007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.598046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.729 qpair failed and we were unable to recover it. 00:27:18.729 [2024-07-14 07:44:34.598346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.598543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.598581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.729 qpair failed and we were unable to recover it. 00:27:18.729 [2024-07-14 07:44:34.598810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.599077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.599109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.729 qpair failed and we were unable to recover it. 00:27:18.729 [2024-07-14 07:44:34.599333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.599571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.729 [2024-07-14 07:44:34.599596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.729 qpair failed and we were unable to recover it. 
00:27:18.729 [2024-07-14 07:44:34.599784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.729 [2024-07-14 07:44:34.599974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.729 [2024-07-14 07:44:34.600001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.729 qpair failed and we were unable to recover it.
00:27:18.729 [... the same four-line pattern repeats for every reconnect attempt from 07:44:34.600 through 07:44:34.674: a pair of posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x1bc69f0 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:27:18.734 [2024-07-14 07:44:34.674668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.734 [2024-07-14 07:44:34.674929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.734 [2024-07-14 07:44:34.674960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.734 qpair failed and we were unable to recover it.
00:27:18.735 [2024-07-14 07:44:34.675169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.675528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.675580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.675787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.676007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.676034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.676265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.676564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.676593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.676765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.676941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.676975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.677185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.677493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.677549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.677798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.677994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.678021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.678226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.678593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.678647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 
00:27:18.735 [2024-07-14 07:44:34.678891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.679101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.679131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.679355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.679654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.679715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.679950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.680158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.680199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.680452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.680775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.680837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.681054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.681322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.681375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.681603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.681819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.681848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.682062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.682275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.682305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 
00:27:18.735 [2024-07-14 07:44:34.682590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.682767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.682798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.682976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.683279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.683305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.683527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.683910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.683940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.684172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.684518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.684575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.684806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.685037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.685063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.685501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.685911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.685941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.686162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.686449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.686501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 
00:27:18.735 [2024-07-14 07:44:34.686744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.686924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.686953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.687163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.687390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.687418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.687812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.688083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.688112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.688350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.688679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.688733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.688961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.689223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.689280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.689483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.689804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.689859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 00:27:18.735 [2024-07-14 07:44:34.690082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.690505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.690558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.735 qpair failed and we were unable to recover it. 
00:27:18.735 [2024-07-14 07:44:34.690870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.735 [2024-07-14 07:44:34.691089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.691118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.691322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.691683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.691746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.691995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.692173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.692199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.692385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.692625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.692654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.692852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.693067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.693095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.693315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.693549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.693575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.693758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.693980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.694010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 
00:27:18.736 [2024-07-14 07:44:34.694184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.694390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.694419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.694617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.694797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.694826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.695042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.695246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.695306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.695543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.695778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.695804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.696021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.696236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.696265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.696481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.696693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.696734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.696942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.697153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.697194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 
00:27:18.736 [2024-07-14 07:44:34.697413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.697674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.697728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.697960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.698139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.698168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.698404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.698727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.698788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.699007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.699203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.699232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.699438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.699644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.699673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.699904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.700087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.700116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.700528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.700777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.700806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 
00:27:18.736 [2024-07-14 07:44:34.701031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.701252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.701339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.701542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.701792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.701844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.702085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.702381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.702442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.702737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.702987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.703018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.703248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.703600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.703650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.703891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.704129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.704159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.704437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.704690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.704716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 
00:27:18.736 [2024-07-14 07:44:34.704938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.705126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.705155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.705396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.705585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.705612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.705822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.706058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.736 [2024-07-14 07:44:34.706087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.736 qpair failed and we were unable to recover it. 00:27:18.736 [2024-07-14 07:44:34.706445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.706784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.706810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.707029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.707235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.707261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.707472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.707678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.707707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.707910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.708152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.708181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 
00:27:18.737 [2024-07-14 07:44:34.708381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.708621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.708651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.708886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.709100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.709130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.709361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.709551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.709577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.709780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.709985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.710012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.710355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.710635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.710660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.710891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.711166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.711195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.711426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.711686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.711715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 
00:27:18.737 [2024-07-14 07:44:34.711922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.712145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.712174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.712531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.712755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.712781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.712976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.713214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.713243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.713493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.713685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.713716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.713923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.714153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.714182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.714417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.714598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.714625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.714776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.714979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.715006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 
00:27:18.737 [2024-07-14 07:44:34.715185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.715530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.715584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.715787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.715995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.716025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.716307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.716655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.716712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.716971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.717163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.717188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.717436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.717679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.717721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.717925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.718141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.718167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.718373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.718575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.718600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 
00:27:18.737 [2024-07-14 07:44:34.718860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.719101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.719127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.719350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.719743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.719799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.720002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.720270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.720328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.720744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.720972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.720999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.721200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.721380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.721406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.721613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.721850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.721885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.722085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.722374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.722403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 
00:27:18.737 [2024-07-14 07:44:34.722606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.722814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.722841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.723362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.723676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.723705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.737 qpair failed and we were unable to recover it. 00:27:18.737 [2024-07-14 07:44:34.723956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.737 [2024-07-14 07:44:34.724193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.724220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.724418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.724589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.724615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.724801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.725021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.725051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.725265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.725576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.725602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.725832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.726062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.726092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 
00:27:18.738 [2024-07-14 07:44:34.726296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.726618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.726677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.726909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.727139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.727168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.727393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.727630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.727659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.727875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.728105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.728134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.728343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.728659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.728722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.729030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.729289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.729341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 00:27:18.738 [2024-07-14 07:44:34.729617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.729855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.738 [2024-07-14 07:44:34.729892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.738 qpair failed and we were unable to recover it. 
[... the same failure cycle (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from [2024-07-14 07:44:34.730101] through [2024-07-14 07:44:34.802493] ...]
00:27:18.743 [2024-07-14 07:44:34.802763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.743 [2024-07-14 07:44:34.802979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.743 [2024-07-14 07:44:34.803005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:18.743 qpair failed and we were unable to recover it.
00:27:18.743 [2024-07-14 07:44:34.803193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.803350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.803376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.803676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.803909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.803943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.804174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.804551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.804601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.804844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.805041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.805071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.805282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.805476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.805505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.805929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.806169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.806197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.806435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.806751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.806801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 
00:27:18.743 [2024-07-14 07:44:34.807018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.807288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.807341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.807551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.807736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.807762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.808014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.808203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.808229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.808439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.808624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.808651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.808878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.809054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.809082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.809284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.809443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.809469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.809655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.809807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.809851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 
00:27:18.743 [2024-07-14 07:44:34.810058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.810268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.810297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.810596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.810863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.810901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.811118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.811304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.811330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.811506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.811791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.811841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.812063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.812242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.812269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.812459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.812649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.812675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.812883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.813069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.813094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 
00:27:18.743 [2024-07-14 07:44:34.813282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.813493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.813537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.813755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.813964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.743 [2024-07-14 07:44:34.813990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.743 qpair failed and we were unable to recover it. 00:27:18.743 [2024-07-14 07:44:34.814178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.814339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.814365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.814574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.814757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.814783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.814942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.815125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.815157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.815338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.815529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.815580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.815811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.815992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.816021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 
00:27:18.744 [2024-07-14 07:44:34.816218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.816374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.816400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.816559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.816711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.816737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.816921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.817102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.817128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.817314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.817509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.817536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.817725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.817935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.817964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.818173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.818517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.818568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.818794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.819010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.819039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 
00:27:18.744 [2024-07-14 07:44:34.819317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.819640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.819695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.819925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.820132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.820178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.820422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.820840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.820913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.821154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.821466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.821524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.821839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.822058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.822089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.822327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.822517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.822557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.822763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.823000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.823029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 
00:27:18.744 [2024-07-14 07:44:34.823270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.823591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.823641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.823848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.824063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.824092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.824298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.824513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.824565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.824800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.824983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.825011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.825255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.825624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.825678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.825914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.826121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.826149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.826343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.826551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.826580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 
00:27:18.744 [2024-07-14 07:44:34.826788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.826996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.827027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.827214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.827452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.827481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.827694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.827935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.827964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.744 [2024-07-14 07:44:34.828177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.828461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.744 [2024-07-14 07:44:34.828487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.744 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.828695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.828885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.828922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.829109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.829330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.829356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.829534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.829741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.829767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 
00:27:18.745 [2024-07-14 07:44:34.829995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.830186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.830211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.830425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.830616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.830643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.830848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.831064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.831093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.831392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.831625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.831654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.831892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.832146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.832175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.832382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.832699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.832757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.833012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.833241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.833270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 
00:27:18.745 [2024-07-14 07:44:34.833504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.833802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.833832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.834090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.834509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.834560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.834835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.835090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.835120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.835363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.835598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.835624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.835846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.836034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.836076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.836272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.836460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.836502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.836747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.836969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.836996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 
00:27:18.745 [2024-07-14 07:44:34.837231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.837438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.837467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.837646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.837890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.837916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.838106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.838314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.838343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.838527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.838734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.838763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.838965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.839121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.839147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.839332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.839525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.839551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.839733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.839916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.839943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 
00:27:18.745 [2024-07-14 07:44:34.840105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.840373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.840403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.840595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.840805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.840832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.841056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.841367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.841421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.841651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.841839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.841872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.842032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.842211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.842242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.842452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.842756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.842807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.843047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.843326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.843355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 
00:27:18.745 [2024-07-14 07:44:34.843539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.843779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.843831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.745 qpair failed and we were unable to recover it. 00:27:18.745 [2024-07-14 07:44:34.844065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.745 [2024-07-14 07:44:34.844450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.844505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.844714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.844941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.844971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.845177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.845356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.845386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.845605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.845794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.845821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.846007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.846219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.846249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.846482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.846737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.846766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 
00:27:18.746 [2024-07-14 07:44:34.846970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.847159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.847187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.847401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.847561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.847587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.847795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.848005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.848034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.848264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.848629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.848683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.848925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.849155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.849183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.849389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.849604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.849656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.849860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.850111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.850140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 
00:27:18.746 [2024-07-14 07:44:34.850352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.850701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.850762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.850971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.851149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.851180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.851406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.851685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.851740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.851982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.852166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.852207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.852381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.852594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.852623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.852838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.853076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.853117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 00:27:18.746 [2024-07-14 07:44:34.853330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.853483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.746 [2024-07-14 07:44:34.853509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:18.746 qpair failed and we were unable to recover it. 
[... identical failure sequence (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated for every qpair connection attempt from 07:44:34.853 through 07:44:34.922; duplicate log entries trimmed ...]
00:27:19.020 [2024-07-14 07:44:34.922891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.923172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.923201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.020 qpair failed and we were unable to recover it. 00:27:19.020 [2024-07-14 07:44:34.923397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.923552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.923577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.020 qpair failed and we were unable to recover it. 00:27:19.020 [2024-07-14 07:44:34.923794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.924035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.924064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.020 qpair failed and we were unable to recover it. 00:27:19.020 [2024-07-14 07:44:34.924278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.924561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.924611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.020 qpair failed and we were unable to recover it. 00:27:19.020 [2024-07-14 07:44:34.924852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.020 [2024-07-14 07:44:34.925069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.925096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.925284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.925579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.925633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.925923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.926127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.926153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 
00:27:19.021 [2024-07-14 07:44:34.926345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.926549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.926574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.926799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.926999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.927028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.927228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.927487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.927542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.927772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.927980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.928010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.928215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.928507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.928533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.928739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.928942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.928971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.929177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.929451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.929480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 
00:27:19.021 [2024-07-14 07:44:34.929689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.929895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.929925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.930107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.930356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.930381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.930639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.930878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.930908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.931118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.931387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.931412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.931676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.931891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.931918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.932110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.932361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.932386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.932614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.932811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.932840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 
00:27:19.021 [2024-07-14 07:44:34.933083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.933266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.933295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.933562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.933796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.933825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.934033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.934308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.934358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.934590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.934793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.934822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.935018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.935219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.935247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.935518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.935904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.935951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.936184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.936388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.936413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 
00:27:19.021 [2024-07-14 07:44:34.936628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.936839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.936871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.937158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.937577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.937629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.937827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.938062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.938091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.938304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.938713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.938764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.939002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.939197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.939238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.021 qpair failed and we were unable to recover it. 00:27:19.021 [2024-07-14 07:44:34.939446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.021 [2024-07-14 07:44:34.939669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.939695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.939941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.940151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.940177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 
00:27:19.022 [2024-07-14 07:44:34.940410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.940766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.940816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.941049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.941236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.941264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.941430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.941660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.941713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.942015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.942228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.942256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.942466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.942691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.942716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.942888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.943046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.943073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.943360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.943648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.943674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 
00:27:19.022 [2024-07-14 07:44:34.943891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.944090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.944131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.944375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.944611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.944640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.944845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.945060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.945089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.945298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.945654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.945701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.945907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.946139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.946172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.946379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.946587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.946612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.946820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.947057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.947087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 
00:27:19.022 [2024-07-14 07:44:34.947277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.947517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.947546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.947719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.947959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.947999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.948219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.948469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.948521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.948759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.948973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.949000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.949156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.949349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.949375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.949563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.949767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.949798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.950002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.950234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.950297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 
00:27:19.022 [2024-07-14 07:44:34.950496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.950677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.950711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.950942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.951174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.951204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.951463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.951823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.022 [2024-07-14 07:44:34.951852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.022 qpair failed and we were unable to recover it. 00:27:19.022 [2024-07-14 07:44:34.952076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.952355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.952384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.952598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.952864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.952897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.953080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.953333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.953359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.953580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.953768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.953797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-07-14 07:44:34.953999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.954201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.954230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.954467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.954658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.954684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.954895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.955142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.955171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.955460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.955793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.955845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.956111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.956400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.956425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.956630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.956878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.956904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.957185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.957579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.957632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-07-14 07:44:34.957840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.958079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.958108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.958380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.958542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.958568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.958784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.959025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.959055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.959243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.959483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.959512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.959716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.959987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.960016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.960299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.960564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.960593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.960799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.961008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.961038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-07-14 07:44:34.961254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.961579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.961634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.961879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.962115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.962144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.962378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.962585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.962613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.962820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.963060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.963089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.963342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.963501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.963528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.963826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.964112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.964141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.964363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.964582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.964611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-07-14 07:44:34.964847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.965069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.965096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.965315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.965578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.965630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.965845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.966059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.966088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.966298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.966568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.966624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.966833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.967046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.967072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.967330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.967542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.967571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.967965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.968174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.968199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 
00:27:19.023 [2024-07-14 07:44:34.968359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.968584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.968611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.968882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.969068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.969094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.023 [2024-07-14 07:44:34.969306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.969508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.023 [2024-07-14 07:44:34.969537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.023 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.969880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.970155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.970184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.970428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.970716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.970742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.971003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.971228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.971278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.971568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.971861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.971920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 
00:27:19.024 [2024-07-14 07:44:34.972105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.972278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.972307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.972522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.972728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.972757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.972990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.973225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.973251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.973429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.973589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.973616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.973825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.974046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.974075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.974282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.974493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.974519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.974810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.975018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.975045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 
00:27:19.024 [2024-07-14 07:44:34.975206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.975434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.975459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.975717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.975958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.975988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.976180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.976498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.976553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.976795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.977001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.977031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.977238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.977532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.977585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.977817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.978000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.978030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 00:27:19.024 [2024-07-14 07:44:34.978272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.978571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.024 [2024-07-14 07:44:34.978599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.024 qpair failed and we were unable to recover it. 
[... the same four-line failure pattern repeats for every remaining reconnect attempt from 07:44:34.978 through 07:44:35.054: two posix.c:1032:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420" entry, then "qpair failed and we were unable to recover it." ...]
00:27:19.029 [2024-07-14 07:44:35.054448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.029 [2024-07-14 07:44:35.054758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.029 [2024-07-14 07:44:35.054788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.029 qpair failed and we were unable to recover it.
00:27:19.029 [2024-07-14 07:44:35.055002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.055204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.055229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.055463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.055678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.055703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.055984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.056338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.056413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.056673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.056940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.056966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.057141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.057356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.057383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.057671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.057878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.057924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.058109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.058309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.058335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 
00:27:19.029 [2024-07-14 07:44:35.058550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.058798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.058839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.059138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.059505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.059570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.059810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.059989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.060020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.060214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.060528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.060587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.060793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.061005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.061039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.061256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.061437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.061463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.061698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.061905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.061935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 
00:27:19.029 [2024-07-14 07:44:35.062159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.062376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.029 [2024-07-14 07:44:35.062403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.029 qpair failed and we were unable to recover it. 00:27:19.029 [2024-07-14 07:44:35.062615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.062832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.062862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.063053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.063346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.063404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.063619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.063839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.063877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.064117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.064301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.064342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.064748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.065008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.065038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.065241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.065502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.065529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 
00:27:19.030 [2024-07-14 07:44:35.065720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.065936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.065962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.066153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.066398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.066455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.066776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.066998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.067028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.067260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.067422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.067463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.067703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.067897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.067926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.068166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.068398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.068427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.068631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.068832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.068862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 
00:27:19.030 [2024-07-14 07:44:35.069084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.069255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.069296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.069539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.069749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.069775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.069992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.070156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.070198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.070510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.070783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.070812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.071035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.071312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.071360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.071611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.071806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.071832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.072013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.072349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.072408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 
00:27:19.030 [2024-07-14 07:44:35.072766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.073005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.073032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.073251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.073462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.073505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.073690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.073924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.073954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.074186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.074555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.074614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.074901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.075132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.075161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.075398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.075592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.075619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.075802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.075987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.076017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 
00:27:19.030 [2024-07-14 07:44:35.076252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.076479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.076505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.076738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.076969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.076999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.077232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.077441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.077468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.077628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.077840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.077892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.030 qpair failed and we were unable to recover it. 00:27:19.030 [2024-07-14 07:44:35.078099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.030 [2024-07-14 07:44:35.078471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.078517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.078706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.078860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.078896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.079083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.079392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.079449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 
00:27:19.031 [2024-07-14 07:44:35.079630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.079830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.079877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.080080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.080408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.080460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.080864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.081124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.081153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.081387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.081576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.081604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.081767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.081957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.081984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.082141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.082325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.082367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.082747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.083002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.083032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 
00:27:19.031 [2024-07-14 07:44:35.083255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.083447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.083473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.083659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.083849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.083892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.084125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.084330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.084359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.084783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.085019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.085049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.085254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.085417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.085443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.085631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.085817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.085843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.086063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.086263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.086293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 
00:27:19.031 [2024-07-14 07:44:35.086498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.086927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.086957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.087165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.087399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.087425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.087635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.087846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.087882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.088124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.088293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.088324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.088631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.088877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.088904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.089115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.089289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.089315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.089512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.089754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.089796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 
00:27:19.031 [2024-07-14 07:44:35.089980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.090169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.090195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.090587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.090831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.090879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.031 qpair failed and we were unable to recover it. 00:27:19.031 [2024-07-14 07:44:35.091066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.031 [2024-07-14 07:44:35.091299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.091328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.091530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.091798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.091846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.092137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.092347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.092373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.092551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.092741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.092783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.092964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.093195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.093224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 
00:27:19.032 [2024-07-14 07:44:35.093526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.093776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.093802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.094032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.094200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.094240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.094462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.094806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.094876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.095140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.095372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.095397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.095647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.095824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.095853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.096093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.096492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.096544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.096913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.097151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.097182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 
00:27:19.032 [2024-07-14 07:44:35.097402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.097594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.097621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.097808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.098017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.098048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.098283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.098520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.098549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.098748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.098953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.098982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.099198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.099392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.099418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.099604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.099754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.099780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.099942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.100131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.100158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 
00:27:19.032 [2024-07-14 07:44:35.100354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.100520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.100548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.100762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.100972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.101002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.101246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.101502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.101542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.101740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.101934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.101961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.102116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.102335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.102363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.102564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.102731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.102756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.102959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.103123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.103151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 
00:27:19.032 [2024-07-14 07:44:35.103398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.103586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.103612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.103792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.103983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.104010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.104160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.104372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.104399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.104583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.104771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.104797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.104978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.105161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.105188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.032 qpair failed and we were unable to recover it. 00:27:19.032 [2024-07-14 07:44:35.105362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.032 [2024-07-14 07:44:35.105602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.033 [2024-07-14 07:44:35.105631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.033 qpair failed and we were unable to recover it. 00:27:19.033 [2024-07-14 07:44:35.105856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.033 [2024-07-14 07:44:35.106043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.033 [2024-07-14 07:44:35.106072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.033 qpair failed and we were unable to recover it. 
00:27:19.307 [2024-07-14 07:44:35.178246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.178425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.178460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.178714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.178937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.178980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.179280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.179651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.179682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.179928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.180150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.180187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.180413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.180746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.180797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.180998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.181201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.181230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.181463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.181661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.181713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 
00:27:19.307 [2024-07-14 07:44:35.182001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.182217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.182304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.182538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.182750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.182777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.182985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.183226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.183252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.183584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.183842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.183879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.184092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.184465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.184516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.184904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.185109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.185138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.185365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.185664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.185730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 
00:27:19.307 [2024-07-14 07:44:35.185962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.186140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.186169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.186379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.186612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.186640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.186879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.187108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.187137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.187369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.187730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.187785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.188022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.188375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.188432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.188667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.188889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.188919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.189152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.189335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.189364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 
00:27:19.307 [2024-07-14 07:44:35.189567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.189892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.189945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.190178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.190431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.190488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.307 [2024-07-14 07:44:35.190732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.190925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.307 [2024-07-14 07:44:35.190951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.307 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.191187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.191506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.191563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.191765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.191953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.191978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.192163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.192356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.192385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.192566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.192730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.192774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 
00:27:19.308 [2024-07-14 07:44:35.192979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.193169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.193210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.193427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.193786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.193839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.194058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.194236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.194265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.194515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.194728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.194761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.194928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.195125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.195154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.195334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.195535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.195564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.195797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.196026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.196057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 
00:27:19.308 [2024-07-14 07:44:35.196296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.196464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.196504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.196709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.196917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.196943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.197142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.197458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.197509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.197774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.198005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.198034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.198252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.198628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.198681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.198889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.199080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.199109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.199350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.199520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.199547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 
00:27:19.308 [2024-07-14 07:44:35.199769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.199933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.199963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.200144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.200332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.200359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.200659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.200935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.200962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.201222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.201436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.201463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.201679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.201863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.201899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.202087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.202362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.202413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.202612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.202837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.202862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 
00:27:19.308 [2024-07-14 07:44:35.203117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.203392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.203444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.203841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.204068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.204096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.204289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.204519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.204576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.204797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.204972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.205001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.205186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.205420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.205446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.205703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.205880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.205910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.206091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.206412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.206438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 
00:27:19.308 [2024-07-14 07:44:35.206631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.206781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.206807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.308 qpair failed and we were unable to recover it. 00:27:19.308 [2024-07-14 07:44:35.207020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.308 [2024-07-14 07:44:35.207254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.207284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.207602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.207836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.207871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.208088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.208390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.208448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.208674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.208897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.208926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.209114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.209509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.209571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.209805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.210013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.210053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 
00:27:19.309 [2024-07-14 07:44:35.210283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.210571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.210600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.210807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.211016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.211045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.211258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.211611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.211670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.211880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.212105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.212134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.212374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.212715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.212774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.212985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.213170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.213200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.213405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.213653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.213716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 
00:27:19.309 [2024-07-14 07:44:35.214007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.214257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.214283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.214502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.214785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.214814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.215022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.215269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.215318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.215552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.215785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.215813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.216012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.216317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.216369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.216586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.216771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.216797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.217015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.217326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.217384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 
00:27:19.309 [2024-07-14 07:44:35.217623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.217836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.217871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.218107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.218339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.218365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.218532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.218729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.218755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.218996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.219262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.219316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.219557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.219759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.219788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.220005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.220281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.220337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.220574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.220791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.220820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 
00:27:19.309 [2024-07-14 07:44:35.221043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.221296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.221348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.221561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.221785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.221811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.222041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.222292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.222345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.222638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.222911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.222938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.223133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.223466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.223519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.309 [2024-07-14 07:44:35.223751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.223968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.309 [2024-07-14 07:44:35.223997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.309 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.224291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.224738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.224790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 
00:27:19.310 [2024-07-14 07:44:35.224994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.225310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.225368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.225574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.225757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.225790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.226033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.226236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.226262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.226480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.226856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.226920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.227169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.227483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.227549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.227801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.228021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.228048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.228273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.228501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.228530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 
00:27:19.310 [2024-07-14 07:44:35.228927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.229130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.229159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.229390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.229582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.229608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.229864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.230080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.230106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.230265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.230487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.230516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.230892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.231118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.231145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.231349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.231537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.231563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.231813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.232072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.232102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 
00:27:19.310 [2024-07-14 07:44:35.232279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.232514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.232540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.232735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.233068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.233094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.233302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.233498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.233525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.233737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.233942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.233972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.234177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.234393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.234419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.234630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.234861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.234897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.235117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.235469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.235520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 
00:27:19.310 [2024-07-14 07:44:35.235751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.235991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.236018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.236198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.236526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.236584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.236813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.237040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.237070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.237345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.237636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.237694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.237943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.238172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.238201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.238432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.238648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.238674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.238910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.239117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.239145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 
00:27:19.310 [2024-07-14 07:44:35.239355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.239591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.239641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.310 qpair failed and we were unable to recover it. 00:27:19.310 [2024-07-14 07:44:35.239872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.240090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.310 [2024-07-14 07:44:35.240116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.240350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.240755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.240814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.241041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.241368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.241419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.241697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.241939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.241969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.242175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.242520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.242576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.242782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.243022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.243052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 
00:27:19.311 [2024-07-14 07:44:35.243237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.243475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.243501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.243713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.243921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.243951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.244154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.244388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.244414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.244629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.244832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.244858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.245095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.245409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.245462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.245738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.245981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.246011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.246212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.246478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.246528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 
00:27:19.311 [2024-07-14 07:44:35.246699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.246936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.246966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.247334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.247776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.247838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.248100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.248503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.248558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.248763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.248966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.248996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.249192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.249492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.249544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.249803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.249997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.250027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.250265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.250478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.250507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 
00:27:19.311 [2024-07-14 07:44:35.250714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.250947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.250976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.251189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.251425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.251455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.251690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.251893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.251923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.252158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.252463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.252531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.311 qpair failed and we were unable to recover it. 00:27:19.311 [2024-07-14 07:44:35.252713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.311 [2024-07-14 07:44:35.252919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.252948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.253143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.253328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.253354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.253696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.254121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.254178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 
00:27:19.312 [2024-07-14 07:44:35.254377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.254588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.254617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.254824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.255065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.255094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.255329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.255717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.255788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.256004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.256182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.256212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.256376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.256548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.256588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.256797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.256965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.256994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.257222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.257400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.257428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 
00:27:19.312 [2024-07-14 07:44:35.257659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.257860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.257899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.258076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.258254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.258280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.258511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.258717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.258746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.258956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.259140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.259180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.259374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.259565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.259622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.259889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.260100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.260126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.260346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.260574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.260626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 
00:27:19.312 [2024-07-14 07:44:35.260838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.261074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.261103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.261483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.261902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.261932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.262146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.262405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.262430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.262626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.262842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.262874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.263121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.263445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.263496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.263922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.264158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.264187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.264421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.264603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.264632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 
00:27:19.312 [2024-07-14 07:44:35.264838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.265088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.265114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.265307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.265509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.265535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.265746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.265962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.265992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.266208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.266410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.266436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.266598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.266807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.266836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.267055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.267295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.267338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.267701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.267982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.268012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 
00:27:19.312 [2024-07-14 07:44:35.268240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.268655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.268706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.268890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.269105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.269131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.312 qpair failed and we were unable to recover it. 00:27:19.312 [2024-07-14 07:44:35.269332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.312 [2024-07-14 07:44:35.269677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.269729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.269960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.270263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.270322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.270555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.270788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.270818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.271044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.271210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.271237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.271399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.271590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.271619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 
00:27:19.313 [2024-07-14 07:44:35.271826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.272037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.272068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.272281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.272525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.272554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.272785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.272972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.273002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.273226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.273443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.273469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.273841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.274080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.274109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.274350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.274589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.274618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.274846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.275056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.275082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 
00:27:19.313 [2024-07-14 07:44:35.275289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.275481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.275507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.275721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.275974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.276000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.276200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.276418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.276444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.276687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.276891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.276933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.277138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.277517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.277577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.277816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.277995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.278032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.278209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.278448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.278477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 
00:27:19.313 [2024-07-14 07:44:35.278684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.278901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.278927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.279116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.279567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.279617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.279834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.280026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.280056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.280247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.280440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.280466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.280700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.280913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.280944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.281153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.281326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.281355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.281556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.281807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.281833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 
00:27:19.313 [2024-07-14 07:44:35.282032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.282253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.282279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.282446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.282660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.282686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.282922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.283106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.283136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.283498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.283854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.283920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.284204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.284426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.313 [2024-07-14 07:44:35.284455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.313 qpair failed and we were unable to recover it. 00:27:19.313 [2024-07-14 07:44:35.284665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.284820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.284861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.285123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.285327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.285355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 
00:27:19.314 [2024-07-14 07:44:35.285643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.285903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.285931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.286110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.286268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.286295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.286505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.286728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.286753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.286966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.287175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.287202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.287389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.287682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.287734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.287947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.288138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.288179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.288386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.288569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.288595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 
00:27:19.314 [2024-07-14 07:44:35.288784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.288981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.289010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.289288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.289669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.289719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.289964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.290172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.290198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.290357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.290575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.290605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.290832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.291044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.291073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.291276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.291774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.291825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.292107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.292288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.292312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 
00:27:19.314 [2024-07-14 07:44:35.292554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.292783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.292813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.293100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.293425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.293477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.293748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.293979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.294003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.294189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.294369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.294393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.294610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.294776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.294803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.295011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.295220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.295247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.295565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.295819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.295847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 
00:27:19.314 [2024-07-14 07:44:35.296042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.296491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.296543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.296777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.296941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.296968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.297158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.297440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.297492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.297845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.298107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.298136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.298521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.298895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.298951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.299156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.299531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.299588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.299883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.300107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.300136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 
00:27:19.314 [2024-07-14 07:44:35.300343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.300553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.300603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.300799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.301056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.301086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.301288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.301496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.314 [2024-07-14 07:44:35.301525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.314 qpair failed and we were unable to recover it. 00:27:19.314 [2024-07-14 07:44:35.301734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.315 [2024-07-14 07:44:35.301974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.315 [2024-07-14 07:44:35.302004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.315 qpair failed and we were unable to recover it. 00:27:19.315 [2024-07-14 07:44:35.302253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.315 [2024-07-14 07:44:35.302539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.315 [2024-07-14 07:44:35.302602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.315 qpair failed and we were unable to recover it. 00:27:19.315 [2024-07-14 07:44:35.302838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.315 [2024-07-14 07:44:35.303006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.315 [2024-07-14 07:44:35.303033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.315 qpair failed and we were unable to recover it. 00:27:19.315 [2024-07-14 07:44:35.303199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.315 [2024-07-14 07:44:35.303498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.315 [2024-07-14 07:44:35.303550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.315 qpair failed and we were unable to recover it. 
00:27:19.315 [2024-07-14 07:44:35.303766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.303965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.303999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.304173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.304331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.304360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.304561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.304760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.304789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.305022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.305321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.305373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.305579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.305775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.305805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.306032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.306256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.306305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.306549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.306726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.306770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.306977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.307277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.307334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.307560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.307763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.307792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.308038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.308282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.308333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.308532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.308773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.308802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.309042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.309370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.309421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.309630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.309855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.309893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.310103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.310397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.310426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.310635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.310932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.310962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.311141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.311369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.311395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.311591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.311755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.311781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.311981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.312166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.312192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.312351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.312541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.312569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.312757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.312946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.312976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.313144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.313383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.313412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.313715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.313921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.313951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.315 qpair failed and we were unable to recover it.
00:27:19.315 [2024-07-14 07:44:35.314222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.315 [2024-07-14 07:44:35.314432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.314457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.314708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.314911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.314940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.315169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.315372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.315401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.315637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.315856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.315890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.316092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.316273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.316299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.316486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.316699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.316725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.316913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.317099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.317128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.317316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.317500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.317541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.317723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.317922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.317949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.318137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.318319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.318345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.318546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.318776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.318805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.319018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.319225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.319254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.319583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.319801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.319827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.320023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.320205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.320234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.320419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.320620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.320649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.320864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.321072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.321101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.321334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.321541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.321566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.321824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.322011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.322041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.322252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.322461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.322490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.322665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.322898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.322928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.323161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.323371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.323432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.323633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.323861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.323897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.324127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.324339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.324366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.324557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.324726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.324752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.324910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.325076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.325102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.325315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.325514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.325544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.325743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.325924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.325954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.326266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.326722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.326774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.326974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.327155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.327199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.327427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.327625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.327655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.316 [2024-07-14 07:44:35.327850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.328023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.316 [2024-07-14 07:44:35.328051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.316 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.328356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.328628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.328654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.328888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.329080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.329109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.329295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.329462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.329488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.329695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.329895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.329925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.330116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.330328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.330355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.330537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.330843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.330902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.331118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.331324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.331353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.331586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.331818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.331847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.332088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.332267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.332301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.332484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.332698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.332724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.332885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.333044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.333071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.333302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.333619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.333680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.333885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.334073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.334102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.334338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.334507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.334535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.334733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.334931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.334960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.335167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.335445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.335474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.335745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.335983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.336013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.336221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.336403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.336429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.336642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.336823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.336885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.337102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.337454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.337507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.337941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.338175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.338201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.338383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.338592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.338621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.338823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.339017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.339046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.339246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.339422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.339451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.339825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.340087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.340117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.340320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.340532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.340561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.340774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.340978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.341007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.341209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.341542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.341601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.341810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.341971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.341999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.342213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.342393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.342419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.317 [2024-07-14 07:44:35.342614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.342831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.317 [2024-07-14 07:44:35.342858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.317 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.343068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.343386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.343437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.343668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.343850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.343883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.344073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.344228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.344269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.344461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.344675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.344702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.344955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.345135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.345176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.345530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.345892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.345920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.346162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.346350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.346376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.346558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.346891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.346952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.347184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.347510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.347560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.347898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.348154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.348182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.348480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.348675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.348701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.348932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.349116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.349145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.349338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.349607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.349659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.349904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.350117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.350144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.350321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.350598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.350659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.350885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.351089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.351118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.351286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.351572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.351624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.351859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.352032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.352058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.352270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.352662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.352721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.352966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.353192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.353260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.353491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.353724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.353752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.353959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.354172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.354201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.354432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.354774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.354824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.355049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.355423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.355468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.355699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.355880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.355907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.356123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.356430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.356482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.356761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.356963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.356993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.357194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.357400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.357429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.357631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.357850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.357888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.358056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.358259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.358285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.358468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.358716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.358745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.358960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.359188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.359258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.359465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.359666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.359695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.318 qpair failed and we were unable to recover it.
00:27:19.318 [2024-07-14 07:44:35.359890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.360104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.318 [2024-07-14 07:44:35.360129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.360280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.360511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.360536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.360738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.360924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.360969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.361174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.361502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.361560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.361792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.361991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.362020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.362229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.362497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.362547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.362782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.362969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.363000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.363204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.363415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.363441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.363674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.363839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.319 [2024-07-14 07:44:35.363875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.319 qpair failed and we were unable to recover it.
00:27:19.319 [2024-07-14 07:44:35.364093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.364278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.364304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.364487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.364823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.364890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.365103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.365308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.365334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.365553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.365793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.365822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.366036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.366217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.366243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.366405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.366594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.366621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.366805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.366992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.367019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 
00:27:19.319 [2024-07-14 07:44:35.367186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.367393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.367422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.367659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.367876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.367903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.368057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.368223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.368251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.368441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.368648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.368675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.368897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.369056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.369084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.369270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.369630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.369687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.369888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.370092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.370121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 
00:27:19.319 [2024-07-14 07:44:35.370318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.370530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.370558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.370771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.370934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.370961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.371173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.371355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.371381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.371591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.371794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.371823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.372035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.372242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.372271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.319 qpair failed and we were unable to recover it. 00:27:19.319 [2024-07-14 07:44:35.372473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.372680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.319 [2024-07-14 07:44:35.372709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.372918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.373140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.373166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 
00:27:19.320 [2024-07-14 07:44:35.373392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.373720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.373769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.373985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.374174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.374217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.374421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.374822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.374882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.375183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.375387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.375414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.375590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.375813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.375839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.376048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.376214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.376240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.376404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.376612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.376654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 
00:27:19.320 [2024-07-14 07:44:35.376903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.377146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.377175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.377393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.377637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.377663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.377825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.378021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.378048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.378563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.378764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.378787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.379003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.379205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.379230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.379428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.379603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.379631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.379877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.380078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.380107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 
00:27:19.320 [2024-07-14 07:44:35.380511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.380795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.380824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.381107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.381372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.381397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.381584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.381788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.381822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.382011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.382220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.382272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.382649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.382861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.382893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.383098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.383289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.383315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.383535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.383776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.383805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 
00:27:19.320 [2024-07-14 07:44:35.384016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.384196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.384225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.384439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.384637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.384662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.384887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.385101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.385128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.385316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.385546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.385575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.385777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.385958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.385988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.386259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.386545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.386571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.386793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.386985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.387016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 
00:27:19.320 [2024-07-14 07:44:35.387215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.387441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.387470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.387678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.387920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.387950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.320 qpair failed and we were unable to recover it. 00:27:19.320 [2024-07-14 07:44:35.388153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.320 [2024-07-14 07:44:35.388458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.388515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.388744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.388937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.388964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.389176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.389360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.389386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.389536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.389718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.389745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.389906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.390083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.390110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 
00:27:19.321 [2024-07-14 07:44:35.390318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.390504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.390531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.390700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.390911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.390938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.391180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.391554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.391606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.391822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.392010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.392040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.392222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.392399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.392426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.392657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.392893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.392923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.393158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.393336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.393363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 
00:27:19.321 [2024-07-14 07:44:35.393578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.393737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.393764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.393973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.394209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.394267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.394469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.394843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.394902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.395108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.395390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.395419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.395632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.395823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.395849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.396071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.396282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.396309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.396530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.396719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.396746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 
00:27:19.321 [2024-07-14 07:44:35.396933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.397143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.397169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.397414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.397730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.397756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.397978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.398136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.398163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.398345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.398530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.398556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.398768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.398963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.398993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.399245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.399550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.399575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.399785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.399991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.400021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 
00:27:19.321 [2024-07-14 07:44:35.400207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.400385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.400414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.400644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.400856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.400891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.401118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.401535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.401586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.401793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.401983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.402011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.402201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.402392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.402418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.402609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.402787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.402813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.321 qpair failed and we were unable to recover it. 00:27:19.321 [2024-07-14 07:44:35.403025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.403253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.321 [2024-07-14 07:44:35.403279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 
00:27:19.322 [2024-07-14 07:44:35.403479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.403690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.403716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.403904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.404063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.404090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.404277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.404459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.404485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.404807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.405083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.405113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.405350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.405538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.405568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.405760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.405944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.405989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.406219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.406570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.406616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 
00:27:19.322 [2024-07-14 07:44:35.406821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.407033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.407063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.407255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.407412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.407437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.407646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.407819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.407848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.408086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.408460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.408513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.408747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.408960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.408987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.409169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.409507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.409562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.409763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.409971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.410001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 
00:27:19.322 [2024-07-14 07:44:35.410209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.410592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.410654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.410858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.411070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.411099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.411338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.411546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.411588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.411829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.412069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.412098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.412329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.412707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.412754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.412996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.413202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.413270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.413521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.413716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.413742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 
00:27:19.322 [2024-07-14 07:44:35.413965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.414243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.414293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.414497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.414696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.414725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.414962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.415150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.415176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.415406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.415761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.415814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.416062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.416232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.416261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.416464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.416879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.416925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 00:27:19.322 [2024-07-14 07:44:35.417131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.417526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.322 [2024-07-14 07:44:35.417576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.322 qpair failed and we were unable to recover it. 
00:27:19.322 [2024-07-14 07:44:35.417751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.322 [2024-07-14 07:44:35.417972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.322 [2024-07-14 07:44:35.418002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.322 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats without variation from 07:44:35.418 through 07:44:35.497 and is elided here ...]
00:27:19.597 [2024-07-14 07:44:35.497202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.597 [2024-07-14 07:44:35.497487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.597 [2024-07-14 07:44:35.497513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.597 qpair failed and we were unable to recover it.
00:27:19.597 [2024-07-14 07:44:35.497751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.497983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.498016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.498231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.498557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.498622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.498835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.499046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.499072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.499598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.499808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.499838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.500036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.500245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.500303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.500582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.500812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.500840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.501086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.501295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.501324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 
00:27:19.597 [2024-07-14 07:44:35.501579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.501811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.501840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.502036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.502368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.502427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.502795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.503058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.503089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.597 qpair failed and we were unable to recover it. 00:27:19.597 [2024-07-14 07:44:35.503363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.503770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.597 [2024-07-14 07:44:35.503822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.504038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.504294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.504343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.504574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.504778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.504804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.504967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.505276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.505332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 
00:27:19.598 [2024-07-14 07:44:35.505572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.505751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.505779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.505989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.506225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.506287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.506531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.506890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.506954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.507161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.507573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.507633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.507908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.508151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.508179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.508359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.508558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.508587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.508824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.509012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.509039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 
00:27:19.598 [2024-07-14 07:44:35.509283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.509684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.509739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.509976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.510162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.510187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.510351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.510565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.510591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.510813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.511038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.511067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.511344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.511643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.511672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.511858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.512068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.512096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.512264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.512472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.512501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 
00:27:19.598 [2024-07-14 07:44:35.512712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.512962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.512992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.513165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.513348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.513378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.513589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.513757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.513786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.514005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.514190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.514218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.514459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.514664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.514712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.515039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.515274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.515299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.515486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.515702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.515728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 
00:27:19.598 [2024-07-14 07:44:35.515979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.516139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.516165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.516338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.516631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.516680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.516894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.517075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.517100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.517313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.517508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.517552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.517762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.518021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.518048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.598 [2024-07-14 07:44:35.518205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.518436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.598 [2024-07-14 07:44:35.518482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.598 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.518725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.518958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.518985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 
00:27:19.599 [2024-07-14 07:44:35.519178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.519389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.519419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.519633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.519837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.519862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.520087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.520353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.520395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.520624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.520814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.520843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.521046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.521307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.521336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.521569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.521766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.521795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.521993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.522161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.522187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 
00:27:19.599 [2024-07-14 07:44:35.522408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.522652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.522698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.522885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.523042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.523067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.523249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.523471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.523516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.523817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.524033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.524059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.524218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.524425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.524453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.524668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.524891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.524940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.525136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.525385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.525414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 
00:27:19.599 [2024-07-14 07:44:35.525652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.525829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.525857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.526084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.526307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.526335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.526548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.526736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.526763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.526972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.527150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.527176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.527358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.527596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.527621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.527806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.528002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.528028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.528187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.528356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.528383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 
00:27:19.599 [2024-07-14 07:44:35.528571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.528750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.528776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.528978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.529223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.529249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.529438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.529674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.529721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.529939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.530121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.530161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.599 qpair failed and we were unable to recover it. 00:27:19.599 [2024-07-14 07:44:35.530381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.530553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.599 [2024-07-14 07:44:35.530582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.530828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.531041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.531067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.531279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.531437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.531463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 
00:27:19.601 [2024-07-14 07:44:35.531650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.531807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.531833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.532048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.532233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.532262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.532461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.532677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.532704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.532930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.533111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.533136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.533294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.533558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.533585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.533798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.533986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.534013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.534196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.534390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.534417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 
00:27:19.601 [2024-07-14 07:44:35.534687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.534932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.534958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.535117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.535307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.535333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.535541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.535723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.535749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.535938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.536095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.536131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.536286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.536550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.536597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.536900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.537074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.537100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.537297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.537567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.537596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 
00:27:19.601 [2024-07-14 07:44:35.537800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.537991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.538018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.538227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.538455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.538502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.538680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.538959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.538985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.539145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.539349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.539376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.539537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.539737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.539763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.539925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.540094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.540119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.540334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.540548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.540577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 
00:27:19.601 [2024-07-14 07:44:35.540789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.540955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.540982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.601 [2024-07-14 07:44:35.541144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.541322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.601 [2024-07-14 07:44:35.541348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.601 qpair failed and we were unable to recover it. 00:27:19.602 [2024-07-14 07:44:35.541536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.541721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.541746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.602 qpair failed and we were unable to recover it. 00:27:19.602 [2024-07-14 07:44:35.541961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.542149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.542174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.602 qpair failed and we were unable to recover it. 00:27:19.602 [2024-07-14 07:44:35.542362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.542548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.542574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.602 qpair failed and we were unable to recover it. 00:27:19.602 [2024-07-14 07:44:35.542761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.542916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.542942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.602 qpair failed and we were unable to recover it. 00:27:19.602 [2024-07-14 07:44:35.543111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.543349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.602 [2024-07-14 07:44:35.543378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.602 qpair failed and we were unable to recover it. 
00:27:19.602 [2024-07-14 07:44:35.543668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.602 [2024-07-14 07:44:35.543894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.602 [2024-07-14 07:44:35.543950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.602 qpair failed and we were unable to recover it.
00:27:19.602 [... the four-line failure pattern above repeats for every reconnect attempt between 07:44:35.543 and 07:44:35.610 (roughly 150 identical records: connect() failed with errno = 111, then a sock connection error for tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."); duplicate records elided ...]
00:27:19.607 [2024-07-14 07:44:35.610671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.607 [2024-07-14 07:44:35.610838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.607 [2024-07-14 07:44:35.610874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.607 qpair failed and we were unable to recover it.
00:27:19.607 [2024-07-14 07:44:35.611095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.611348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.611374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.611557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.611724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.611752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.611958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.612121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.612146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.612349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.612605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.612651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.612837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.613023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.613049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.613238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.613424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.613452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.613657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.613888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.613937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 
00:27:19.607 [2024-07-14 07:44:35.614104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.614313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.614359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.614639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.614875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.614933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.615104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.615287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.615316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.615515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.615747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.615794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.615972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.616160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.616187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.616339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.616524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.616553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.616777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.616988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.617013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 
00:27:19.607 [2024-07-14 07:44:35.617160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.617320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.617362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.617592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.617829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.617858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.618053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.618236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.618266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.618503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.618730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.618758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.618971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.619135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.619178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.619411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.619580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.619607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.607 [2024-07-14 07:44:35.619829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.620012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.620039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 
00:27:19.607 [2024-07-14 07:44:35.620211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.620434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.607 [2024-07-14 07:44:35.620481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.607 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.620705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.620925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.620951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.621120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.621326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.621354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.621563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.621780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.621809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.622003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.622169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.622196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.622466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.622723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.622751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.622952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.623118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.623144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 
00:27:19.608 [2024-07-14 07:44:35.623343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.623683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.623714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.623945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.624112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.624138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.624357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.624623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.624651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.624856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.625071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.625098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.625322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.625559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.625608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.625826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.626016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.626043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.626250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.626475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.626503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 
00:27:19.608 [2024-07-14 07:44:35.626743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.626977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.627004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.627170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.627448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.627477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.627658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.627861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.627897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.628080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.628304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.628333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.628564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.628779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.628808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.629010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.629192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.629220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.629423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.629612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.629637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 
00:27:19.608 [2024-07-14 07:44:35.629804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.630006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.630032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.630197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.630420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.630468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.630710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.630860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.630928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.631114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.631325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.631351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.631573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.631756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.631782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.631946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.632113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.632150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.632393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.632605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.632631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 
00:27:19.608 [2024-07-14 07:44:35.632846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.633031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.633057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.633248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.633475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.633525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.608 qpair failed and we were unable to recover it. 00:27:19.608 [2024-07-14 07:44:35.633766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.608 [2024-07-14 07:44:35.634009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.634035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.634198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.634417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.634443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.634658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.634860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.634898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.635093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.635311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.635339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.635522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.635758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.635783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 
00:27:19.609 [2024-07-14 07:44:35.636005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.636165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.636207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.636422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.636639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.636685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.636861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.637055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.637081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.637293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.637515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.637568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.637772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.637960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.637987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.638166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.638418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.638462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.638673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.638932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.638958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 
00:27:19.609 [2024-07-14 07:44:35.639135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.639293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.639319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.639509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.639666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.639693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.639881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.640041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.640067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.640256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.640443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.640470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.640665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.640843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.640883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.641053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.641217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.641243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.641426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.641614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.641641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 
00:27:19.609 [2024-07-14 07:44:35.641864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.642036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.642063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.642220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.642382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.642409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.642598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.642789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.642817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.643035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.643190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.643220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.643370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.643580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.643607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.643772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.643940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.643966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.644177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.644328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.644355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 
00:27:19.609 [2024-07-14 07:44:35.644507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.644693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.644720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.609 qpair failed and we were unable to recover it. 00:27:19.609 [2024-07-14 07:44:35.644918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.609 [2024-07-14 07:44:35.645106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.645144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.645358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.645538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.645564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.645755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.645920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.645946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.646135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.646316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.646342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.646529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.646724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.646750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.646965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.647151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.647177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 
00:27:19.610 [2024-07-14 07:44:35.647339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.647526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.647552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.647772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.647955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.647982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.648148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.648360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.648387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.648569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.648750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.648777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.648955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.649142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.649168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.649353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.649541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.649567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.649752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.649903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.649938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 
00:27:19.610 [2024-07-14 07:44:35.650124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.650338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.650364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.650574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.650796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.650822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.651011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.651201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.651228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.651438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.651624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.651651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.651839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.652020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.652046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.652228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.652412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.652438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 00:27:19.610 [2024-07-14 07:44:35.652661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.652876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.610 [2024-07-14 07:44:35.652903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.610 qpair failed and we were unable to recover it. 
00:27:19.610 [2024-07-14 07:44:35.653060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.653229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.653255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.610 qpair failed and we were unable to recover it.
00:27:19.610 [2024-07-14 07:44:35.653417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.653617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.653643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.610 qpair failed and we were unable to recover it.
00:27:19.610 [2024-07-14 07:44:35.653834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.654020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.654046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.610 qpair failed and we were unable to recover it.
00:27:19.610 [2024-07-14 07:44:35.654199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.654408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.654434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.610 qpair failed and we were unable to recover it.
00:27:19.610 [2024-07-14 07:44:35.654625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.654807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.654833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.610 qpair failed and we were unable to recover it.
00:27:19.610 [2024-07-14 07:44:35.655013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.655200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.655225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.610 qpair failed and we were unable to recover it.
00:27:19.610 [2024-07-14 07:44:35.655369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.655524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.655550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.610 qpair failed and we were unable to recover it.
00:27:19.610 [2024-07-14 07:44:35.655732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.655880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.610 [2024-07-14 07:44:35.655908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.656070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.656232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.656259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.656443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.656625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.656652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.656810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.657011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.657038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.657191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.657377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.657403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.657583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.657796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.657826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.658032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.658196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.658221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.658428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.658637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.658664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.658855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.659026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.659052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.659233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.659438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.659465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.659653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.659845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.659879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.660059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.660276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.660303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.660495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.660682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.660714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.660914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.661104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.661141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.661292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.661473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.661499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.661655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.661835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.661872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.662097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.662322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.662348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.662511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.662671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.662699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.662863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.663058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.663084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.663275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.663491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.663517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.663675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.663860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.663894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.664058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.664229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.664257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.664456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.664636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.664663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.664856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.665059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.665085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.665278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.665458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.665484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.665696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.665860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.665897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.666080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.666272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.666299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.666511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.666722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.666748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.666938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.667119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.667152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.667336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.667549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.667576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.667731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.667954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.667981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.611 qpair failed and we were unable to recover it.
00:27:19.611 [2024-07-14 07:44:35.668140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.668300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.611 [2024-07-14 07:44:35.668327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.668524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.668712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.668738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.668953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.669152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.669178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.669341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.669530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.669557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.669742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.669958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.669984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.670184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.670398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.670425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.670583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.670740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.670767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.670920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.671103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.671139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.671350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.671504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.671531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.671710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.671931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.671957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.672115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.672278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.672306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.672507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.672663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.672689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.672892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.673115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.673149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.673370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.673567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.673593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.673752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.673969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.673996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.674216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.674404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.674431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.674613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.674823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.674849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.675086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.675301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.675327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.675490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.675641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.675667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.675864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.676092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.676119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.676334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.676521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.676548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.676763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.676977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.677003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.677161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.677345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.677371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.677554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.677737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.677763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.677953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.678141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.678168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.678353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.678574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.678605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.678813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.678977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.679004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.679167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.679345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.679372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.679527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.679706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.679732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.679907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.680092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.680118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.680275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.680463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.680489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.612 [2024-07-14 07:44:35.680700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.680862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.612 [2024-07-14 07:44:35.680916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.612 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.681105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.681290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.681316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.681504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.681698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.681724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.681878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.682088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.682115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.682295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.682479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.682509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.682699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.682892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.682919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.683106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.683282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.683308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.683495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.683715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.683741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.683898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.684081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.684108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.684292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.684511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.684537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.684753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.684933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.684959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.685169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.685358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.685384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.685540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.685752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.685778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.685962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.686152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.686177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.686377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.686554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.686580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.686795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.686982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.687009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.687192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.687376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.687402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.687583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.687749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.687775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.687992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.688204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.688230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.688387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.688575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.688601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.688808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.689022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.689049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.689234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.689446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.689472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.689662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.689845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.689879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.690062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.690217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.690243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.690430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.690585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.690611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.690794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.690991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.691019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.691202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.691379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.691404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.691584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.691764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.691791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.691977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.692158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.692185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.692341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.692553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.692579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.692732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.692894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.692921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.613 qpair failed and we were unable to recover it.
00:27:19.613 [2024-07-14 07:44:35.693110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.693298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.613 [2024-07-14 07:44:35.693324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.693510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.693690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.693716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.693927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.694136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.694162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.694351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.694533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.694560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.694754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.694926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.694953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.695162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.695350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.695376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.695522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.695707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.695733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.695921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.696092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.696118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.696308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.696490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.696516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.696728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.696921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.696949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.697131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.697317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.697343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.697528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.697683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.697710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.697892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.698043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.698070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.698281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.698470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.698497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.698672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.698871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.698898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.699083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.699237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.699263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.699454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.699617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.699643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.699823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.700008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.700034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.700191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.700343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.700369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.700578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.700771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.700797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.700984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.701141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.701167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.701348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.701526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.701553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.701733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.701917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.701944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.702134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.702321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.702348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.702535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.702745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.702776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.702963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.703147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.703173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.703389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.703577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.703603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.703789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.703989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.704015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.704201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.704387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.704413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.704623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.704808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.704836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.705043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.705254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.705280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.614 [2024-07-14 07:44:35.705472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.705651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.614 [2024-07-14 07:44:35.705677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.614 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.705839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.706032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.706059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.706244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.706452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.706479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.706670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.706825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.706851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.707080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.707281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.707311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.707589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.707783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.707810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.708029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.708229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.708256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.708451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.708642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.708669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.708863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.709061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.709088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.709306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.709466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.709494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.709687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.709886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.709915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.710130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.710342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.710369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.710561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.710779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.710807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.711001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.711162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.711189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.711392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.711610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.711637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.711826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.712050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.712077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.712291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.712483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.712510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.712729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.712919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.712947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.713166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.713384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.713412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.713647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.713841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.713875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.714068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.714303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.714332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.714556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.714800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.714844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.715880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.716085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.615 [2024-07-14 07:44:35.716114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:19.615 qpair failed and we were unable to recover it.
00:27:19.615 [2024-07-14 07:44:35.716297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.716520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.716562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-14 07:44:35.716942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.717182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.717206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-14 07:44:35.717425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.717677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.717718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-14 07:44:35.717946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.718156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.718180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.615 qpair failed and we were unable to recover it. 00:27:19.615 [2024-07-14 07:44:35.718440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.615 [2024-07-14 07:44:35.718768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.718831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.719120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.719329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.719370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.719566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.719842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.719870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-14 07:44:35.720168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.720406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.720434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.720685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.720850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.720901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.721095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.721363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.721405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.721575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.721794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.721834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.722105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.722348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.722391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.722605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.722807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.722832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.722997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.723210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.723253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-14 07:44:35.723538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.723791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.723816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.724008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.724221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.724264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.724451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.724705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.724748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.724971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.725214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.725242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.725525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.725822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.725847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.726054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.726242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.726285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.726499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.726706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.726731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-14 07:44:35.726984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.727218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.727246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.727522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.727731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.727756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.727994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.728205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.728231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.728487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.728686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.728712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.728909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.729135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.729180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.729440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.729748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.729773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.730022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.730287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.730329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-14 07:44:35.730583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.730787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.730812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.731050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.731248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.731291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.731479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.731700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.731725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.731965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.732197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.732225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.732426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.732603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.732629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.732822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.733071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.733114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 00:27:19.616 [2024-07-14 07:44:35.733324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.733581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.616 [2024-07-14 07:44:35.733623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.616 qpair failed and we were unable to recover it. 
00:27:19.616 [2024-07-14 07:44:35.733810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.734040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.734083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.734285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.734510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.734553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.734742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.734946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.734990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.735232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.735455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.735497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.735680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.735888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.735913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.736097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.736362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.736405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.736630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.736857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.736897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 
00:27:19.617 [2024-07-14 07:44:35.737064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.737304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.737346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.737594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.737800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.737827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.738020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.738233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.738276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.738515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.738716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.738741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.738976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.739242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.739284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.739499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.739677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.739702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.739890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.740101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.740144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 
00:27:19.617 [2024-07-14 07:44:35.740320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.740542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.740584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.740776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.740959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.740985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.741177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.741445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.741487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.741680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.741862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.741892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.742101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.742307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.742350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.742569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.742797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.742822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.743016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.743229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.743272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 
00:27:19.617 [2024-07-14 07:44:35.743484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.743738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.743781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.743991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.744213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.744256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.744462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.744690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.744733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.744948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.745205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.745247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.745457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.745689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.745714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.745940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.746141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.746185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.746426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.746645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.746670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 
00:27:19.617 [2024-07-14 07:44:35.746853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.747107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.747150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.747356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.747580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.747625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.747844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.748040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.617 [2024-07-14 07:44:35.748066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.617 qpair failed and we were unable to recover it. 00:27:19.617 [2024-07-14 07:44:35.748279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.748501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.748543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.748756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.748942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.748968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.749184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.749418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.749461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.749704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.749935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.749978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-07-14 07:44:35.750192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.750452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.750492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.750674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.750844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.750875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.751066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.751276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.751318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.751556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.751760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.751785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.752029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.752260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.752303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.752515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.752693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.752717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.618 [2024-07-14 07:44:35.752924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.753135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.753178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 
00:27:19.618 [2024-07-14 07:44:35.753405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.753600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.618 [2024-07-14 07:44:35.753642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.618 qpair failed and we were unable to recover it. 00:27:19.891 [2024-07-14 07:44:35.753873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.754037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.754063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.891 qpair failed and we were unable to recover it. 00:27:19.891 [2024-07-14 07:44:35.754277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.754539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.754580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.891 qpair failed and we were unable to recover it. 00:27:19.891 [2024-07-14 07:44:35.754772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.754960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.754985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.891 qpair failed and we were unable to recover it. 00:27:19.891 [2024-07-14 07:44:35.755191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.755427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.755470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.891 qpair failed and we were unable to recover it. 00:27:19.891 [2024-07-14 07:44:35.755666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.891 [2024-07-14 07:44:35.755882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.755909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.756149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.756342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.756386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 
00:27:19.892 [2024-07-14 07:44:35.756580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.756784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.756809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.756995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.757199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.757241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.757454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.757677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.757719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.757933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.758151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.758193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.758399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.758658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.758699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.758893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.759126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.759168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.759418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.759619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.759661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 
00:27:19.892 [2024-07-14 07:44:35.759845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.760038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.760067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.760274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.760497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.760539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.760780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.760951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.760977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.761193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.761418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.761463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.761697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.761899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.761925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.762167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.762362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.762406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.762645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.762871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.762897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 
00:27:19.892 [2024-07-14 07:44:35.763110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.763313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.763355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.763546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.763716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.763741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.763989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.764191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.764234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.764487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.764660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.764690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.764903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.765078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.765121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.765336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.765543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.765585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.765769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.765981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.766023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 
00:27:19.892 [2024-07-14 07:44:35.766230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.766476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.766518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.766683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.766899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.766924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.767109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.767330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.767373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.767611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.767802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.767827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.768047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.768253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.768296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.768520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.768724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.768749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 00:27:19.892 [2024-07-14 07:44:35.768956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.769152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.892 [2024-07-14 07:44:35.769198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:19.892 qpair failed and we were unable to recover it. 
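errno = 111 on Linux is ECONNREFUSED: each connect() toward 10.0.0.2:4420 is answered with a TCP RST because no NVMe/TCP target is listening on that port while the target application is down, and the same four-record cycle (two posix_sock_create failures, one nvme_tcp_qpair_connect_sock error, one "qpair failed" line) repeats for every attempt. As a minimal illustration (not SPDK code; the address and port are copied from the records above, and this assumes a reachable host with nothing bound to the port), a plain BSD socket hits the same errno:

/* econnrefused.c - reproduce "connect() failed, errno = 111".
   Build: cc -o econnrefused econnrefused.c
   Assumes a Linux host that is up but has no listener on the port. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
                /* With the host up but no listener, this prints errno 111 (ECONNREFUSED). */
                printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
}

A refused connection fails immediately (the peer sends an RST) rather than waiting out a timeout, which is why the log shows the attempts packed microseconds apart.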
00:27:19.892 [... connect() failed (errno = 111) retry records for tqpair=0x7f0c30000b90 continue between the lines below; records 07:44:35.769429 - 07:44:35.771979 elided ...]
00:27:19.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 18565 Killed "${NVMF_APP[@]}" "$@"
00:27:19.893 07:44:35 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:27:19.893 07:44:35 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:19.893 07:44:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:19.893 07:44:35 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:19.893 07:44:35 -- common/autotest_common.sh@10 -- # set +x
00:27:19.893 [... connect() failed (errno = 111) retry records for tqpair=0x7f0c30000b90; records 07:44:35.772228 - 07:44:35.775103 elided ...]
00:27:19.893 [... connect() failed (errno = 111) retry records for tqpair=0x7f0c30000b90 continue between the lines below; records 07:44:35.775381 - 07:44:35.777540 elided ...]
00:27:19.893 07:44:35 -- nvmf/common.sh@469 -- # nvmfpid=19266
00:27:19.893 07:44:35 -- nvmf/common.sh@470 -- # waitforlisten 19266
00:27:19.893 07:44:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:19.893 07:44:35 -- common/autotest_common.sh@819 -- # '[' -z 19266 ']'
00:27:19.893 07:44:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:19.893 07:44:35 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:19.893 07:44:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:19.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:19.893 07:44:35 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:19.893 07:44:35 -- common/autotest_common.sh@10 -- # set +x
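
The trace above shows nvmfappstart launching a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then waitforlisten polling (max_retries=100) until the new process accepts connections on /var/tmp/spdk.sock. A minimal C sketch of that polling idea follows; the helper name wait_for_listen, the 100 ms delay, and the overall structure are illustrative assumptions, not SPDK's actual implementation (the real helper is shell code in autotest_common.sh). Only the socket path and retry count come from the log:

    /* Illustrative sketch of what a "waitforlisten"-style helper does:
     * poll a UNIX-domain RPC socket until the freshly started target
     * accepts a connection, or give up after max_retries attempts. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int wait_for_listen(const char *path, int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_un addr = {0};
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;            /* target is up and listening */
            }
            close(fd);
            usleep(100 * 1000);      /* 100 ms between attempts (assumed) */
        }
        return -1;                   /* target never came up */
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            puts("target is listening");
        else
            puts("timed out waiting for /var/tmp/spdk.sock");
        return 0;
    }
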
00:27:19.893 [... repeated connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error records for tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420; records 07:44:35.777767 - 07:44:35.807806 elided, every qpair failed and we were unable to recover it ...]
00:27:19.895 [... repeated connect() failed (errno = 111) records for tqpair=0x7f0c30000b90; records 07:44:35.807998 - 07:44:35.809678 elided ...]
00:27:19.895 [2024-07-14 07:44:35.809936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.895 [2024-07-14 07:44:35.810133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.895 [2024-07-14 07:44:35.810165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.895 qpair failed and we were unable to recover it.
00:27:19.895 [... retries continue against the new tqpair=0x1bc69f0; records 07:44:35.810376 - 07:44:35.811026 elided ...]
00:27:19.896 [2024-07-14 07:44:35.811213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.811420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.811444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.811792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.812015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.812040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.812202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.812357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.812398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.812595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.812817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.812843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.813068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.813216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.813240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.813552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.813772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.813799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.814017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.814176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.814201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 
00:27:19.896 [2024-07-14 07:44:35.814407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.814627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.814688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.814933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.815091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.815116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.815374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.815590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.815637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.815821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.816022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.816048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.816201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.816362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.816402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.816637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.816845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.816875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.817076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.817282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.817306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 
00:27:19.896 [2024-07-14 07:44:35.817500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.817687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.817711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.817899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.818079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.818103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.818341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.818578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.818605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.818855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.819040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.819064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.819253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.819501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.819547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.819757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.819950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.819975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.820138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.820291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.820315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 
00:27:19.896 [2024-07-14 07:44:35.820537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.820754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.896 [2024-07-14 07:44:35.820781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.896 qpair failed and we were unable to recover it. 00:27:19.896 [2024-07-14 07:44:35.820988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.821138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.821163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.821314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.821497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.821521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.821729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.821910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.821936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.822104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.822316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.822340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.822519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.822738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.822765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.822952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.823135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.823160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 
00:27:19.897 [2024-07-14 07:44:35.823348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.823534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.823559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.823761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.823939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.823969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.824132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.824294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.824319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.824554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.824747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.824774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.825010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.825198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.825223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.825401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.825580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.825605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 00:27:19.897 [2024-07-14 07:44:35.825813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.825985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.897 [2024-07-14 07:44:35.826011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.897 qpair failed and we were unable to recover it. 
00:27:19.897 [2024-07-14 07:44:35.826165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.826408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.826457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.826661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.826888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.826930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.827114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.827310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.827334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.827517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.827811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.827840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.828098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.828257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.828281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.828442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.828653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.828677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.828936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.829126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.829167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.829384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.829597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.829621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.829785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.829952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.829978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.830167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.830307] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:27:19.897 [2024-07-14 07:44:35.830365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.830391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.830405] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:19.897 [2024-07-14 07:44:35.830554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.830784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.830810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.830999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.831205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.831232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.831421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.831598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.831623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.831777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.831938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.831963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
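The two non-error records interleaved above are the SPDK nvmf target starting up: SPDK v24.01.1-pre hands the bracketed argument list to DPDK's Environment Abstraction Layer, which claims the cores in the -c 0xF0 mask (cores 4 through 7), maps hugepage memory at --base-virtaddr=0x200000000000, and namespaces this instance's runtime files under --file-prefix=spdk0. As a hedged sketch of that handoff using the raw DPDK API (SPDK assembles this argv internally, so the standalone program below is illustrative only, with the argument values copied from the log):

/* eal_init_sketch.c - how an application would pass the EAL parameters
 * shown in the log to DPDK. Illustrative; SPDK does this internally.
 * Build (assuming a DPDK dev install):
 *   cc eal_init_sketch.c $(pkg-config --cflags --libs libdpdk)
 */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                            /* argv[0]: program name, as in the log */
        "-c", "0xF0",                      /* core mask: run on cores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",  /* fixed base VA for reproducible mappings */
        "--match-allocations",
        "--file-prefix=spdk0",             /* isolate hugepage/runtime files per instance */
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() parses the EAL arguments and reserves hugepage memory;
     * a negative return value means initialization failed. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    puts("EAL initialized");
    rte_eal_cleanup();
    return 0;
}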
00:27:19.897 [2024-07-14 07:44:35.832131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.832313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.832337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.832526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.832692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.832719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.897 [2024-07-14 07:44:35.832927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.833136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.897 [2024-07-14 07:44:35.833175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.897 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.833401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.833579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.833603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.833812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.833998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.834023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.834213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.834424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.834449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.834601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.834807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.834834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.835054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.835237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.835261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.835413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.835631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.835655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.835848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.836025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.836050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.836286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.836526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.836569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.836798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.837008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.837037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.837212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.837440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.837467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.837671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.837849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.837885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.838061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.838251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.838276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.838462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.838642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.838666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.838849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.839066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.839095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.839340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.839520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.839544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.839749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.839911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.839938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.840098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.840280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.840305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.840501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.840712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.840740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.840927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.841137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.841172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.841325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.841497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.841521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.841701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.841859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.841887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.842073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.842224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.842248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.842477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.842654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.842679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.842841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.843032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.843057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.843214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.843415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.843442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.843619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.843826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.843853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.844064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.844256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.844280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.844437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.844644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.844673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.844860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.845078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.845119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.898 qpair failed and we were unable to recover it.
00:27:19.898 [2024-07-14 07:44:35.845351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.845510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.898 [2024-07-14 07:44:35.845534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.845752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.845954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.845980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.846144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.846369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.846393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.846550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.846727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.846751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.846969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.847157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.847182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.847370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.847534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.847559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.847770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.847952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.847977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.848166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.848323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.848348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.848505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.848687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.848711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.848931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.849110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.849137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.849341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.849521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.849549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.849780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.849973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.850001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.850183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.850339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.850364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.850543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.850727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.850751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.850906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.851110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.851134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.851294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.851470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.851494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.851702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.851915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.851941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.852146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.852356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.852381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.852565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.852744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.852768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.852925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.853111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.853135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.853317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.853519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.853563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.853759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.853927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.853955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.854167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.854351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.854375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.854561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.854714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.854738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.854987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.855151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.855176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.855362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.855625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.855677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.855909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.856083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.856111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.856314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.856526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.856551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.856734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.856946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.856971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.857133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.857329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.857354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.899 qpair failed and we were unable to recover it.
00:27:19.899 [2024-07-14 07:44:35.857532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.899 [2024-07-14 07:44:35.857717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.857741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.857897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.858079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.858104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.858257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.858440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.858465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.858627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.858848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.858878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.859110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.859270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.859296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.859469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.859659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.859687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.859884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.860088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.860115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.860289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.860514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.860559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.860773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.860961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.860986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.861193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.861403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.861428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.861611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.861829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.861856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.862076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.862274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.862321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.862532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.862730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.862775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.862976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.863194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.863245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.863482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.863658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.863682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.863876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.864058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.864085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.864265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.864493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.864540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.864769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.864949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.864974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.865179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.865392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.865417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.865606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.865760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.865791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.865955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.866160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.866187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.866413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.866585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.866612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.866814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.867027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.867054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.867237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.867450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.867474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.867630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.867782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.867806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.867989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.868197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.868222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.900 qpair failed and we were unable to recover it.
00:27:19.900 [2024-07-14 07:44:35.868379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.868582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.900 [2024-07-14 07:44:35.868609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 [2024-07-14 07:44:35.868836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.869022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.869050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.869220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.869420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.869447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.869648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.869850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.869883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.870103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.870251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.870275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.870480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.870703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.870730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.870936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.871168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.871213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.871463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.871713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.871763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.871969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.872205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.872237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.872457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.872686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.872733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.872959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.873187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.873234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.873439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.873623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.873651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.873887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.874071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.874096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.874309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.874545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.874592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.874799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.875011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.875041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.875259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.875468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.875492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.875706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.875912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.875946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.876174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.876351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.876378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.876605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.876770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.876796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.877042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 EAL: No free 2048 kB hugepages reported on node 1
00:27:19.901 [2024-07-14 07:44:35.877248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.877296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.877480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.877662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.877687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.877877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.878097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.878124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.878327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.878621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.878673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
00:27:19.901 [2024-07-14 07:44:35.878901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.879135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.901 [2024-07-14 07:44:35.879160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.901 qpair failed and we were unable to recover it.
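Buried in the stream above is a single EAL memory line: "No free 2048 kB hugepages reported on node 1". During initialization DPDK inventories free hugepages per NUMA node; here node 1 has no free 2 MB pages, so hugepage-backed allocations have to be satisfied from another node (or from a different page size, if one is configured). The counters it reads are ordinary Linux sysfs files; the following small sketch prints the same per-node numbers (standard kernel sysfs paths; two nodes assumed, node 1 being the one named in the log):

/* hugepage_check.c - print free 2048 kB hugepages per NUMA node,
 * i.e. the per-node counters behind the EAL message in the log.
 */
#include <stdio.h>

int main(void)
{
    for (int node = 0; node <= 1; node++) {    /* node 1 is the one named in the log */
        char path[128];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/"
                 "hugepages-2048kB/free_hugepages", node);

        FILE *f = fopen(path, "r");
        if (!f) {                              /* node absent or hugepages unsupported */
            printf("node%d: no 2048 kB hugepage info\n", node);
            continue;
        }
        long free_pages = 0;
        if (fscanf(f, "%ld", &free_pages) == 1)
            printf("node%d: %ld free 2048 kB hugepages\n", node, free_pages);
        fclose(f);
    }
    return 0;
}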
00:27:19.901 [2024-07-14 07:44:35.879380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.879564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.879588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.901 qpair failed and we were unable to recover it. 00:27:19.901 [2024-07-14 07:44:35.879799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.879984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.880012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.901 qpair failed and we were unable to recover it. 00:27:19.901 [2024-07-14 07:44:35.880217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.880424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.880450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.901 qpair failed and we were unable to recover it. 00:27:19.901 [2024-07-14 07:44:35.880625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.880830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.880855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.901 qpair failed and we were unable to recover it. 00:27:19.901 [2024-07-14 07:44:35.881029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.881183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.881209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.901 qpair failed and we were unable to recover it. 00:27:19.901 [2024-07-14 07:44:35.881402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.881610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.881634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.901 qpair failed and we were unable to recover it. 00:27:19.901 [2024-07-14 07:44:35.881821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.882020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.882045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.901 qpair failed and we were unable to recover it. 
00:27:19.901 [2024-07-14 07:44:35.882229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.901 [2024-07-14 07:44:35.882410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.882439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.882593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.882745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.882769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.882962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.883154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.883178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.883387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.883542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.883567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.883751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.883932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.883957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.884143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.884323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.884347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.884545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.884729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.884753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 
00:27:19.902 [2024-07-14 07:44:35.884913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.885089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.885114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.885271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.885449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.885474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.885623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.885786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.885811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.886042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.886202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.886226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.886415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.886600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.886626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.886809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.886997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.887022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.887206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.887395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.887419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 
00:27:19.902 [2024-07-14 07:44:35.887631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.887815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.887839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.888008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.888204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.888228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.888382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.888588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.888613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.888793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.888961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.888988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.889172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.889386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.889411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.889559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.889745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.889772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.889986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.890139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.890163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 
00:27:19.902 [2024-07-14 07:44:35.890345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.890528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.890552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.890738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.890896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.890921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.891065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.891258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.891286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.891467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.891648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.891672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.891825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.892002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.892027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.892211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.892395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.892419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.892602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.892753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.892778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 
00:27:19.902 [2024-07-14 07:44:35.892941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.893122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.893147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.893318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.893478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.893505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.902 [2024-07-14 07:44:35.893663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.893846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.902 [2024-07-14 07:44:35.893887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.902 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.894074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.894302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.894327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.894476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.894659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.894683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.894842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.895072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.895097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.895286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.895466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.895490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 
00:27:19.903 [2024-07-14 07:44:35.895672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.895828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.895854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.896045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.896258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.896282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.896443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.896647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.896672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.896862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.897052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.897076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.897280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.897432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.897457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.897654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.897835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.897878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.898087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.898281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.898305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 
00:27:19.903 [2024-07-14 07:44:35.898489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.898694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.898718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.898901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.899088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.899113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.899332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.899516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.899541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.899726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.899912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.899938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.900098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.900291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.900316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.900497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.900676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.900700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.900854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.901041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.901066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 
00:27:19.903 [2024-07-14 07:44:35.901262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.901472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.901496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.901683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.901872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.901897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.902080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.902234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.902259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.902465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.902650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.902674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.902824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.902993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.903018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.903205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.903412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.903436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.903613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.903787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.903812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 
00:27:19.903 [2024-07-14 07:44:35.903996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.904176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.904201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.904379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.904570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.904594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.904777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.904957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.904983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.905195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.905377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.905402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.905609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.905783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.905808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.903 qpair failed and we were unable to recover it. 00:27:19.903 [2024-07-14 07:44:35.905959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.906115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.903 [2024-07-14 07:44:35.906148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.906330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.906485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.906510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 
00:27:19.904 [2024-07-14 07:44:35.906692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.906841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.906870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.907083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.907283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.907307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.907460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.907668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.907693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.907890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.908078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.908103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.908265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.908453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.908478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.908663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.908844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.908873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.909044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.909222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.909247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 
00:27:19.904 [2024-07-14 07:44:35.909407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.909594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.909619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.909777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.909935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.909960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.910173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.910336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.910361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.910513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.910718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.910743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.910953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.911106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.911137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.911348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.911489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.911514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 00:27:19.904 [2024-07-14 07:44:35.911665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.911883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.904 [2024-07-14 07:44:35.911908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.904 qpair failed and we were unable to recover it. 
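[Note: the EAL entry above is DPDK's environment abstraction layer reporting that NUMA node 1 had no free 2048 kB hugepages when the target application started. As a hedged illustration only (these commands are not part of the CI run), hugepage availability on a Linux build node like this one could be inspected with:
  + grep -i hugepages /proc/meminfo   # system-wide totals and free count
  + cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages    # 2 MB pages configured on node 1
  + cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages  # 2 MB pages free on node 1, the node named by EAL
]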
[... the same failure sequence continues uninterrupted from 07:44:35.877 through 07:44:35.912; the single non-error entry is: ...]
00:27:19.904 [2024-07-14 07:44:35.912127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
[... the failure sequence then resumes and repeats through 07:44:35.934, the last entry recorded in this span ...]
00:27:19.906 [2024-07-14 07:44:35.934527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.934690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.934715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.934909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.935070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.935094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.935247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.935443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.935471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.935671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.935849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.935901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.936057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.936231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.936255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.936473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.936661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.936686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.936844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.937033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.937059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 
00:27:19.906 [2024-07-14 07:44:35.937242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.937424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.937449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.937628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.937853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.937883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.938076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.938238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.938276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.938472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.938653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.938678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.938888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.939082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.939107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.939384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.939657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.939686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.939906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.940121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.940146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 
00:27:19.906 [2024-07-14 07:44:35.940310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.940494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.940519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.906 qpair failed and we were unable to recover it. 00:27:19.906 [2024-07-14 07:44:35.940703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.906 [2024-07-14 07:44:35.940858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.940888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.941072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.941260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.941285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.941470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.941680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.941705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.941896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.942108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.942133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.942291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.942452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.942476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.942673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.942861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.942890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 
00:27:19.907 [2024-07-14 07:44:35.943075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.943236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.943260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.943410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.943590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.943615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.943814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.944025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.944051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.944258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.944443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.944468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.944656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.944808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.944833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.945019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.945208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.945233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.945416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.945604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.945629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 
00:27:19.907 [2024-07-14 07:44:35.945839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.946027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.946052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.946240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.946403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.946426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.946612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.946768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.946793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.946951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.947161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.947186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.947370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.947569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.947593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.947813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.947998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.948023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.948209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.948391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.948415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 
00:27:19.907 [2024-07-14 07:44:35.948592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.948775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.948800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.948988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.949151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.949176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.949370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.949613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.949637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.949857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.950020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.950045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.950237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.950425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.950450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.950637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.950794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.950819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.951031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.951243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.951268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 
00:27:19.907 [2024-07-14 07:44:35.951423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.951608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.951633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.951798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.951987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.952013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.952227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.952437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.952461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.952680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.952876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.952902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.907 qpair failed and we were unable to recover it. 00:27:19.907 [2024-07-14 07:44:35.953087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.907 [2024-07-14 07:44:35.953270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.953295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.953483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.953677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.953702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.953930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.954147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.954171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 
00:27:19.908 [2024-07-14 07:44:35.954363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.954519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.954543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.954700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.954890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.954915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.955100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.955246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.955271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.955419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.955627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.955652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.955862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.956051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.956076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.956257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.956466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.956490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.956699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.956905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.956930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 
00:27:19.908 [2024-07-14 07:44:35.957107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.957261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.957287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.957476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.957652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.957676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.957860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.958044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.958069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.958282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.958490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.958514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.958696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.958852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.958881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.959065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.959247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.959272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.959454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.959639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.959663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 
00:27:19.908 [2024-07-14 07:44:35.959839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.960037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.960066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.960243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.960419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.960444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.960627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.960804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.960828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.961015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.961167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.961191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.961366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.961548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.961572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.961752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.961964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.961990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.962177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.962354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.962379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 
00:27:19.908 [2024-07-14 07:44:35.962591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.962750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.962774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.962936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.963115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.963140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.963346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.963497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.963522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.963704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.963859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.963889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.964055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.964264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.964289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.964449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.964658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.964683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 00:27:19.908 [2024-07-14 07:44:35.964873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.965065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.908 [2024-07-14 07:44:35.965090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.908 qpair failed and we were unable to recover it. 
00:27:19.908 [2024-07-14 07:44:35.965280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.965463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.965489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.965678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.965859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.965889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.966048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.966228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.966253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.966416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.966598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.966622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.966806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.967012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.967037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.967220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.967404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.967429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.967582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.967762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.967787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 
00:27:19.909 [2024-07-14 07:44:35.967944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.968103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.968128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.968322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.968506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.968530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.968745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.968922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.968947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.969129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.969293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.969318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.969498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.969653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.969677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.969890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.970057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.970084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.970271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.970429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.970454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 
00:27:19.909 [2024-07-14 07:44:35.970638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.970793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.970820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.970972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.971126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.971151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.971338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.971521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.971545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.971735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.971926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.971952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.972106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.972267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.972291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.972582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.972788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.972812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.973025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.973186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.973210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 
00:27:19.909 [2024-07-14 07:44:35.973392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.973569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.973593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.973781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.973947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.973972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.974160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.974346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.974370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.974523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.974733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.974758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.974946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.975111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.909 [2024-07-14 07:44:35.975135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.909 qpair failed and we were unable to recover it. 00:27:19.909 [2024-07-14 07:44:35.975344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.910 [2024-07-14 07:44:35.975495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.910 [2024-07-14 07:44:35.975520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.910 qpair failed and we were unable to recover it. 00:27:19.910 [2024-07-14 07:44:35.975729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.910 [2024-07-14 07:44:35.975904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.910 [2024-07-14 07:44:35.975929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:19.910 qpair failed and we were unable to recover it. 
00:27:19.914 [2024-07-14 07:44:36.025186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.025342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.025367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.914 qpair failed and we were unable to recover it.
00:27:19.914 [2024-07-14 07:44:36.025550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.025707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.025731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.914 qpair failed and we were unable to recover it.
00:27:19.914 [2024-07-14 07:44:36.025900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.026058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.026083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.914 qpair failed and we were unable to recover it.
00:27:19.914 [2024-07-14 07:44:36.026161] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:19.914 [2024-07-14 07:44:36.026241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.026286] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:19.914 [2024-07-14 07:44:36.026305] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:19.914 [2024-07-14 07:44:36.026318] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:19.914 [2024-07-14 07:44:36.026428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.026382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:27:19.914 [2024-07-14 07:44:36.026451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:27:19.914 [2024-07-14 07:44:36.026413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:27:19.914 [2024-07-14 07:44:36.026441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:27:19.914 [2024-07-14 07:44:36.026443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:27:19.914 [2024-07-14 07:44:36.026636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.026818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.026842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.914 qpair failed and we were unable to recover it.
00:27:19.914 [2024-07-14 07:44:36.027005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.027161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.914 [2024-07-14 07:44:36.027186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:19.914 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure cycle then repeats without variation for every reconnect attempt from 07:44:36.027346 through 07:44:36.080275, always errno = 111 against tqpair=0x1bc69f0 at 10.0.0.2:4420; the console timestamp advances from 00:27:19.914 to 00:27:20.193 over this stretch ...]
00:27:20.193 [2024-07-14 07:44:36.080448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.080624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.080649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.080804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.080979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.081005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.081200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.081354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.081379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.081557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.081716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.081743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.081926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.082078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.082104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.082306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.082487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.082512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.082721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.082891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.082926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 
00:27:20.193 [2024-07-14 07:44:36.083137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.083286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.083310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.083463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.083648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.083672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.083821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.084045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.084071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.084250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.084410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.084435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.084611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.084786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.084815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.084982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.085145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.085170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.085359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.085504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.085529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 
00:27:20.193 [2024-07-14 07:44:36.085697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.085876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.085902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.086064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.086205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.086230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.086385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.086547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.086572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.086725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.086907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.086933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.087082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.087380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.087409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.087617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.087798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.087823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.087984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.088178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.088223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 
00:27:20.193 [2024-07-14 07:44:36.088512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.088696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.088726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.088918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.089113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.089148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.089338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.089518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.089543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.193 [2024-07-14 07:44:36.089699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.089851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.193 [2024-07-14 07:44:36.089881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.193 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.090031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.090216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.090241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.090397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.090606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.090630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.090795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.090977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.091003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 
00:27:20.194 [2024-07-14 07:44:36.091165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.091316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.091341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.091493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.091667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.091692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.091847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.092061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.092086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.092286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.092454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.092479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.092670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.092822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.092847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.093006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.093224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.093248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.093399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.093630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.093654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 
00:27:20.194 [2024-07-14 07:44:36.093820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.094046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.094072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.094241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.094430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.094455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.094616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.094797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.094822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.095030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.095319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.095344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.095520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.095674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.095699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.095852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.096002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.096027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.096177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.096386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.096410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 
00:27:20.194 [2024-07-14 07:44:36.096565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.096726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.096750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.096902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.097072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.097096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.097305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.097512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.097536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.097722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.097912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.097938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.098101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.098267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.098292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.098476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.098636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.098661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 00:27:20.194 [2024-07-14 07:44:36.098839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.099025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.099050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.194 qpair failed and we were unable to recover it. 
00:27:20.194 [2024-07-14 07:44:36.099202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.194 [2024-07-14 07:44:36.099372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.099397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.099612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.099784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.099808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.099972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.100160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.100185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.100349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.100495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.100520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.100676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.100859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.100891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.101071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.101246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.101271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.101428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.101580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.101605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 
00:27:20.195 [2024-07-14 07:44:36.101802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.102000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.102026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.102187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.102370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.102395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.102550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.102729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.102753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.102949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.103105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.103130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.103314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.103493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.103517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.103700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.103885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.103910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.104092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.104253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.104278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 
00:27:20.195 [2024-07-14 07:44:36.104422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.104605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.104630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.104819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.104982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.105007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.105197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.105370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.105395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.105547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.105699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.105723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.105880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.106033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.106058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.106213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.106369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.106394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.106608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.106773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.106798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 
00:27:20.195 [2024-07-14 07:44:36.106964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.107118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.107143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.107320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.107502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.107526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.107705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.107891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.107920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.108108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.108292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.108317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.108485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.108661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.108686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.108834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.109024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.109049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.109207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.109360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.109386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 
00:27:20.195 [2024-07-14 07:44:36.109544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.109727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.109752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.109903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.110086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.110111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.195 [2024-07-14 07:44:36.110293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.110449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.195 [2024-07-14 07:44:36.110473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.195 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.110651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.110802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.110827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.110987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.111143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.111167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.111320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.111485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.111510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.111697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.111878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.111903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 
00:27:20.196 [2024-07-14 07:44:36.112055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.112204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.112228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.112410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.112578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.112603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.112809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.112965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.112990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.113168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.113349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.113383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.113562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.113747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.113773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.113956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.114140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.114164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.114389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.114571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.114595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 
00:27:20.196 [2024-07-14 07:44:36.114749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.114942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.114967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.115142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.115293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.115318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.115471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.115679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.115704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.115886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.116038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.116062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.116243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.116400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.116426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.116605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.116785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.116810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.116970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.117156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.117181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 
00:27:20.196 [2024-07-14 07:44:36.117356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.117538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.117563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.117721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.117883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.117909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.118072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.118222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.118247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.118427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.118584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.118609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.118791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.118968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.118993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.119144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd44b0 is same with the state(5) to be set 00:27:20.196 [2024-07-14 07:44:36.119416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.119620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.119648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 00:27:20.196 [2024-07-14 07:44:36.119843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.120012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.196 [2024-07-14 07:44:36.120038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.196 qpair failed and we were unable to recover it. 
[... the same sequence continues for tqpair=0x7f0c30000b90 (addr=10.0.0.2, port=4420) from 07:44:36.120 through 07:44:36.128, about 20 further occurrences, each ending in "qpair failed and we were unable to recover it." ...]
00:27:20.197 [2024-07-14 07:44:36.128684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.128834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.128859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.129066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.129246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.129271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.129487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.129636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.129660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.129810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.129995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.130021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.130209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.130358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.130384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.130539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.130687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.130711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.130890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.131075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.131100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 
00:27:20.197 [2024-07-14 07:44:36.131318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.131491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.131515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.131704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.131890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.131916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.132093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.132272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.132297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.132481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.132658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.132683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.132846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.133030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.133056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.133210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.133390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.133415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 00:27:20.197 [2024-07-14 07:44:36.133620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.133769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.197 [2024-07-14 07:44:36.133793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.197 qpair failed and we were unable to recover it. 
00:27:20.197 [2024-07-14 07:44:36.133982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.134142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.134167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.134353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.134559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.134584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.134761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.134956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.134982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.135200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.135359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.135386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.135567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.135751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.135775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.135954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.136144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.136168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.136322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.136494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.136518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 
00:27:20.198 [2024-07-14 07:44:36.136683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.136854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.136885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.137067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.137214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.137238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.137447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.137649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.137674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.137853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.138025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.138049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.138204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.138382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.138407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.138591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.138751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.138775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.138964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.139152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.139177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 
00:27:20.198 [2024-07-14 07:44:36.139327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.139513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.139539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.139728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.139912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.139938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.140124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.140306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.140330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.140498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.140677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.140702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.140876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.141028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.141053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.141239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.141399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.141424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.141585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.141755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.141779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 
00:27:20.198 [2024-07-14 07:44:36.142007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.142188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.142213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.142370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.142570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.142594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.142779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.142937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.142963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.143145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.143305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.143331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.143512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.143694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.143719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.143886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.144054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.144078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 00:27:20.198 [2024-07-14 07:44:36.144268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.144423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.198 [2024-07-14 07:44:36.144448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.198 qpair failed and we were unable to recover it. 
00:27:20.198 [2024-07-14 07:44:36.144607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.144775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.144800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.144983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.145163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.145188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.145345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.145515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.145540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.145698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.145881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.145907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.146058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.146260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.146285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.146444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.146635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.146663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.146842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.147012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.147037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 
00:27:20.199 [2024-07-14 07:44:36.147208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.147377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.147401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.147587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.147763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.147788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.147989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.148141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.148165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.148351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.148498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.148524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.148671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.148825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.148851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.149059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.149218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.149242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.149399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.149548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.149573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 
00:27:20.199 [2024-07-14 07:44:36.149754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.149925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.149950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.150135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.150314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.150339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.150552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.150709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.150734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.150904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.151084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.151108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.151262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.151443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.151469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.151673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.151848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.151891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.152050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.152221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.152245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 
00:27:20.199 [2024-07-14 07:44:36.152422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.152583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.152608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.152817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.152976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.153002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.153166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.153350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.153374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.153551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.153734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.153758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.153933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.154126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.154152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.154368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.154537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.154561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.199 qpair failed and we were unable to recover it. 00:27:20.199 [2024-07-14 07:44:36.154743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.199 [2024-07-14 07:44:36.154923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.154949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 
00:27:20.200 [2024-07-14 07:44:36.155133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.155333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.155357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.155547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.155733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.155758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.155915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.156095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.156120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.156278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.156431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.156457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.156613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.156813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.156838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.157047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.157195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.157219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.157423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.157593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.157618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 
00:27:20.200 [2024-07-14 07:44:36.157807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.157962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.157987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.158179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.158333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.158358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.158567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.158743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.158767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.158954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.159110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.159135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.159326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.159479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.159503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.159659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.159840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.159880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.160090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.160239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.160263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 
00:27:20.200 [2024-07-14 07:44:36.160474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.160647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.160672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.160855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.161069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.161094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.161281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.161437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.161463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.161636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.161794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.161824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.161991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.162166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.162193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.162412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.162593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.162617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.162817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.163000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.163025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 
00:27:20.200 [2024-07-14 07:44:36.163194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.163342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.163367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.163585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.163736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.163763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.163935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.164150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.164174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.164353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.164525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.164550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.164763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.164941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.164966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.165140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.165326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.165351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 00:27:20.200 [2024-07-14 07:44:36.165516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.165695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.200 [2024-07-14 07:44:36.165723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.200 qpair failed and we were unable to recover it. 
00:27:20.200 [2024-07-14 07:44:36.165927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.200 [2024-07-14 07:44:36.166102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.200 [2024-07-14 07:44:36.166128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.200 qpair failed and we were unable to recover it.
00:27:20.200 [... the same four-entry sequence (two "connect() failed, errno = 111" errors from posix.c:1032:posix_sock_create, one "sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.") repeats back-to-back roughly 150 more times, from 07:44:36.166317 through 07:44:36.223731 (log prefix 00:27:20.200-00:27:20.206) ...]
00:27:20.206 [2024-07-14 07:44:36.223890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.224068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.224092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.224247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.224426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.224452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.224622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.224796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.224821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.224982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.225136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.225161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.225314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.225485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.225509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.225659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.225811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.225838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.226035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.226187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.226216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 
00:27:20.206 [2024-07-14 07:44:36.226408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.226563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.226589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.226779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.226958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.226984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.227133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.227292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.227323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.227472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.227653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.227678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.227848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.228019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.228044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.228200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.228381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.228406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.228556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.228737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.228762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 
00:27:20.206 [2024-07-14 07:44:36.228947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.229121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.229145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.229321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.229501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.229526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.229705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.229862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.229896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.230101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.230281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.230306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.230514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.230691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.230715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.230916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.231073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.231099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.231284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.231467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.231493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 
00:27:20.206 [2024-07-14 07:44:36.231665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.231856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.231887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.232069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.232216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.232242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.232396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.232549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.232574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.232746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.232922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.232947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.206 qpair failed and we were unable to recover it. 00:27:20.206 [2024-07-14 07:44:36.233129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.206 [2024-07-14 07:44:36.233303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.233328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.233478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.233659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.233684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.233872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.234061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.234086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 
00:27:20.207 [2024-07-14 07:44:36.234241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.234389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.234414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.234572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.234781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.234806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.234990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.235172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.235197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.235379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.235533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.235557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.235706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.235890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.235916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.236103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.236277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.236303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.236514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.236693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.236719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 
00:27:20.207 [2024-07-14 07:44:36.236915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.237067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.237091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.237299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.237455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.237480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.237665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.237851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.237880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.238032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.238179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.238202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.238409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.238560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.238586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.238787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.238971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.238997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.239184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.239366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.239391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 
00:27:20.207 [2024-07-14 07:44:36.239544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.239725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.239749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.239925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.240093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.240118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.240299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.240508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.240533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.240707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.240875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.240900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.241100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.241270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.241294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.241512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.241674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.241698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.241858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.242046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.242071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 
00:27:20.207 [2024-07-14 07:44:36.242276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.242432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.242455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.242637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.242848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.242880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.207 qpair failed and we were unable to recover it. 00:27:20.207 [2024-07-14 07:44:36.243059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.207 [2024-07-14 07:44:36.243237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.243261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.243416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.243593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.243617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.243778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.243945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.243972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.244154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.244306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.244331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.244530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.244707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.244732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 
00:27:20.208 [2024-07-14 07:44:36.244940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.245095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.245121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.245306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.245463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.245489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.245693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.245863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.245896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.246052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.246267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.246291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.246447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.246645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.246669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.246876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.247052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.247077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.247240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.247423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.247449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 
00:27:20.208 [2024-07-14 07:44:36.247606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.247751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.247777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.247954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.248133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.248157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.248344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.248529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.248555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.248722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.248877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.248903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.249065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.249250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.249277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.249430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.249616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.249641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.249851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.250009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.250034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 
00:27:20.208 [2024-07-14 07:44:36.250212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.250395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.250420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.250596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.250781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.250809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.250970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.251156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.251183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.251357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.251542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.251566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.251778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.251939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.251965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.252148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.252328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.252352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.252529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.252712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.252737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 
00:27:20.208 [2024-07-14 07:44:36.252909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.253069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.253096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.253281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.253459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.253484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.253662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.253814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.253838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.254005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.254156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.254181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.208 qpair failed and we were unable to recover it. 00:27:20.208 [2024-07-14 07:44:36.254362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.208 [2024-07-14 07:44:36.254519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.254545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.254702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.254905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.254931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.255095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.255248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.255273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 
00:27:20.209 [2024-07-14 07:44:36.255434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.255581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.255606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.255764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.255950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.255975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.256164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.256354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.256379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.256541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.256726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.256751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.256936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.257138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.257163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.257318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.257493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.257517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.257668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.257824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.257848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 
00:27:20.209 [2024-07-14 07:44:36.258037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.258206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.258231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.258415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.258572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.258599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.258751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.258952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.258978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.259136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.259292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.259318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.259515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.259692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.259717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.259913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.260063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.260087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.260304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.260461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.260486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 
00:27:20.209 [2024-07-14 07:44:36.260661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.260835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.260860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.261038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.261192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.261215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.261375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.261582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.261608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.261764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.261950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.261976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.262129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.262275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.262300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.262452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.262663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.262687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.262841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.263000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.263025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 
00:27:20.209 [2024-07-14 07:44:36.263182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.263362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.263387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.263541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.263695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.263718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.263908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.264097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.264122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.264277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.264450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.264475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.264683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.264888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.264914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.265072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.265252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.265276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 00:27:20.209 [2024-07-14 07:44:36.265458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.265651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.209 [2024-07-14 07:44:36.265676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.209 qpair failed and we were unable to recover it. 
00:27:20.209 [2024-07-14 07:44:36.265839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.265998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.266022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.266214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.266365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.266392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.266581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.266749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.266774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.266985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.267132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.267158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.267338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.267490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.267516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.267672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.267831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.267855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.268022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.268204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.268229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.268389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.268541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.268567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.268723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.268881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.268908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.269083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.269282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.269307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.269468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.269613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.269637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.269806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.269967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.269995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.270176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.270366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.270392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.270611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.270759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.270784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.270973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.271132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.271155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.271305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.271481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.271508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.271695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.271887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.271914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.272076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.272256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.272280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.272426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.272615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.272640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.272795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.273002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.273027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.273213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.273400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.273426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.273575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.273730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.273754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.273969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.274140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.274165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.274343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.274501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.274527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.274713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.274894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.274920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.275094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.275278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.275303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.275460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.275615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.275640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.275852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.276043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.276069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.276225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.276426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.276451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.276607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.276793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.276818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.210 qpair failed and we were unable to recover it.
00:27:20.210 [2024-07-14 07:44:36.277002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.210 [2024-07-14 07:44:36.277157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.277182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.277338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.277510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.277534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.277722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.277899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.277925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.278098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.278249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.278275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.278460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.278641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.278667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.278838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.279011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.279040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.279223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.279409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.279436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.279612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.279817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.279843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.280032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.280187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.280214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.280402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.280557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.280581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.280774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.280993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.281031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.281218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.281367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.281392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.281591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.281742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.281769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.281980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.282136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.282160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.282339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.282500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.282526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.282706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.282893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.282931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.283141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.283314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.283340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.283518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.283718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.283743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.283925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.284110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.284137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.284291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.284444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.284468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.284676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.284831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.284857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.285026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.285184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.285212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.285428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.285616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.285655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.285800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.285978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.286006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.286194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.286382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.286415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.286604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.286790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.286830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.287026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.287215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.287240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.287416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.287611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.287639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.287796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.287975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.288000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.211 qpair failed and we were unable to recover it.
00:27:20.211 [2024-07-14 07:44:36.288203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.288356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.211 [2024-07-14 07:44:36.288387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.288601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.288790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.288815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.288984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.289164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.289190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.289372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.289542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.289566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.289741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.289917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.289943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.290153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.290314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.290341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.290529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.290715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.290745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.290938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.291093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.291120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.291335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.291542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.291569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.291732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.291914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.291940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.292094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.292317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.292344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.292523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.292681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.292709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.292898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.293075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.293102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.293259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.293463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.293500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.293669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.293855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.293901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.294087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.294262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.294295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.294499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.294660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.294687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.294892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.295047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.295073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.295231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.295399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.295424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.295639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.295794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.295821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.295993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.296194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.296229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.296393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.296557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.296584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.296776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.296996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.297030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.297241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.297431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.297461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.297628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.297816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.297844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.298029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.298188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.298216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.298390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.298579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.298603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.298777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.298959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.212 [2024-07-14 07:44:36.298987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.212 qpair failed and we were unable to recover it.
00:27:20.212 [2024-07-14 07:44:36.299145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.299327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.299354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.299514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.299689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.299716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.299912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.300074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.300099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.300316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.300502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.300528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.300710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.300873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.300910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.301098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.301267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.301300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.301481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.301637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.301668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.301870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.302087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.302113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.302273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.302431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.302469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.302671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.302831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.302855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.303058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.303221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.303247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.303409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.303594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.303622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.303804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.303974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.304001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.304188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.304370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.304395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.304571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.304754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.304780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.304938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.305096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.305121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.305278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.305474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.305502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.305653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.305830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.305854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.306027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.306197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.306224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.306443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.306623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.306648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.306825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.306999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.307026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.307226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.307404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.307432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.307632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.307791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.307821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.308047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.308232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.308267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.308437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.308617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.308652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.308847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.309034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.309070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.309259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.309413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.309438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.309626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.309782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.309808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.310002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.310159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.310185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.310389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.310567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.310594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.213 qpair failed and we were unable to recover it.
00:27:20.213 [2024-07-14 07:44:36.310784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.213 [2024-07-14 07:44:36.310942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.310968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.311166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.311355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.311381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.311561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.311719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.311745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.311904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.312094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.312120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.312310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.312498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.312525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.312725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.312881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.312909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.313070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.313260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.313286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.313511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.313668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.313694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.313889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.314076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.214 [2024-07-14 07:44:36.314105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.214 qpair failed and we were unable to recover it.
00:27:20.214 [2024-07-14 07:44:36.314324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.314499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.314525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.314717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.314877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.314903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.315088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.315269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.315295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.315446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.315629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.315656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.315845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.316051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.316079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.316223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.316411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.316437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.316596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.316787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.316814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 
00:27:20.214 [2024-07-14 07:44:36.317007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.317179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.317206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.317396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.317556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.317581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.317756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.317958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.317984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.318197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.318377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.318402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.318580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.318731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.318767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.318957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.319171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.319197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.319380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.319553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.319578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 
00:27:20.214 [2024-07-14 07:44:36.319748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.319923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.319952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.320106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.320319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.320346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.320537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.320723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.320749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.320912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.321070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.321097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.321276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.321436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.321470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.321661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.321824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.321849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.322046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.322205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.322231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 
00:27:20.214 [2024-07-14 07:44:36.322405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.322557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.214 [2024-07-14 07:44:36.322583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.214 qpair failed and we were unable to recover it. 00:27:20.214 [2024-07-14 07:44:36.322779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.322952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.322979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.323166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.323326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.323354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.323541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.323723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.323750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.323936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.324122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.324151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.324343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.324517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.324544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.324726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.324947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.324973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 
00:27:20.215 [2024-07-14 07:44:36.325131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.325311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.325336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.325539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.325690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.325721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.325919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.326107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.326133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.326297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.326482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.326514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.326703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.326882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.326909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.327068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.327259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.327286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.327490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.327674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.327700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 
00:27:20.215 [2024-07-14 07:44:36.327891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.328071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.328097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.328267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.328431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.328457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.328628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.328824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.328850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.329070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.329256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.329283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.329482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.329629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.329653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.329841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.330035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.330063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.330246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.330412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.330438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 
00:27:20.215 [2024-07-14 07:44:36.330595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.330778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.330805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.330992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.331175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.331201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.331392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.331594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.331622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.331816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.332005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.332041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.332250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.332409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.332435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.332622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.332800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.332825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.333019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.333203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.333230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 
00:27:20.215 [2024-07-14 07:44:36.333410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.333623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.333649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.333835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.334028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.334054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.334242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.334458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.334486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.215 qpair failed and we were unable to recover it. 00:27:20.215 [2024-07-14 07:44:36.334666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.334842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.215 [2024-07-14 07:44:36.334874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.335035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.335203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.335229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.335406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.335621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.335647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.335812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.335995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.336021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 
00:27:20.216 [2024-07-14 07:44:36.336180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.336357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.336382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.336552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.336707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.336741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.336914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.337102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.337139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.337303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.337464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.337490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.337651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.337845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.337877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.338052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.338261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.338289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.338479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.338647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.338683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 
00:27:20.216 [2024-07-14 07:44:36.338879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.339056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.339084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.339277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.339439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.339473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.339644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.339793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.339817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.339975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.340182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.340210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.340375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.340547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.340574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.340726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.340914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.340941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.341140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.341297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.341323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 
00:27:20.216 [2024-07-14 07:44:36.341482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.341639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.341681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.341876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.342032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.342060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.342215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.342394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.342428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.342659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.342841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.342875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.343067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.343224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.343248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.343407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.343566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.343592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.216 [2024-07-14 07:44:36.343801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.343975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.344003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 
00:27:20.216 [2024-07-14 07:44:36.344174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.344357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.216 [2024-07-14 07:44:36.344382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.216 qpair failed and we were unable to recover it. 00:27:20.217 [2024-07-14 07:44:36.344565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.217 [2024-07-14 07:44:36.344716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.217 [2024-07-14 07:44:36.344742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.217 qpair failed and we were unable to recover it. 00:27:20.217 [2024-07-14 07:44:36.344919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.217 [2024-07-14 07:44:36.345112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.217 [2024-07-14 07:44:36.345138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.217 qpair failed and we were unable to recover it. 00:27:20.217 [2024-07-14 07:44:36.345285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.217 [2024-07-14 07:44:36.345461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.490 [2024-07-14 07:44:36.345491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.490 qpair failed and we were unable to recover it. 00:27:20.490 [2024-07-14 07:44:36.345647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.490 [2024-07-14 07:44:36.345806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.345832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.346018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.346172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.346197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.346345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.346543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.346568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 
00:27:20.491 [2024-07-14 07:44:36.346736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.346918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.346945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.347137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.347314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.347340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.347520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.347669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.347694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.347887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.348044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.348068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.348255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.348406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.348431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.348628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.348785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.348809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.349007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.349163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.349192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 
00:27:20.491 [2024-07-14 07:44:36.349372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.349542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.349566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.349719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.349902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.349927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.350099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.350313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.350337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.350498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.350660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.350685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.350840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.351031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.351058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.351247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.351402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.351429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.351606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.351791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.351817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 
00:27:20.491 [2024-07-14 07:44:36.352029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.352187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.352211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.352363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.352545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.352571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.352751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.352908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.352933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.353091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.353273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.353298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.353478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.353631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.353658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.353822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.353986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.354012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.354189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.354370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.354395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 
00:27:20.491 [2024-07-14 07:44:36.354556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.354707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.354733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.354918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.355127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.355152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.355325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.355475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.355499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.355649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.355801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.355827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.356009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.356179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.356204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.356390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.356572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.356597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.491 qpair failed and we were unable to recover it. 00:27:20.491 [2024-07-14 07:44:36.356755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.491 [2024-07-14 07:44:36.356917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.492 [2024-07-14 07:44:36.356944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.492 qpair failed and we were unable to recover it. 
00:27:20.492 [2024-07-14 07:44:36.357134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.492 [2024-07-14 07:44:36.357311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.492 [2024-07-14 07:44:36.357336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.492 qpair failed and we were unable to recover it.
[... the same three-message pattern repeats continuously with only the timestamps advancing from 07:44:36.357 through 07:44:36.414: two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it"; the intervening repetitions are elided ...]
00:27:20.497 [2024-07-14 07:44:36.414480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.497 [2024-07-14 07:44:36.414692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.497 [2024-07-14 07:44:36.414718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.497 qpair failed and we were unable to recover it.
00:27:20.497 [2024-07-14 07:44:36.414901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.415064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.415090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.415294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.415470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.415496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.415650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.415854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.415886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.416072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.416257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.416284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.416441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.416593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.416618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.416826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.416985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.417011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.417204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.417389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.417416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 
00:27:20.497 [2024-07-14 07:44:36.417628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.417803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.417829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.417984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.418181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.418207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.418385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.418561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.418586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.418763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.418916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.418942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.419112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.419314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.419339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.419522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.419671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.419698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.419850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.420079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.420105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 
00:27:20.497 [2024-07-14 07:44:36.420261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.420433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.420460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.420621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.420793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.420819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.421011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.421191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.421216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.421391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.421581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.421606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.421786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.421938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.421965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.422144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.422295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.422322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.422532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.422711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.422736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 
00:27:20.497 [2024-07-14 07:44:36.422926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.423077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.423104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.497 qpair failed and we were unable to recover it. 00:27:20.497 [2024-07-14 07:44:36.423293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.497 [2024-07-14 07:44:36.423501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.423526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.423688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.423898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.423923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.424102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.424252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.424278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.424462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.424668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.424693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.424910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.425080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.425106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.425301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.425486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.425512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 
00:27:20.498 [2024-07-14 07:44:36.425709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.425855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.425887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.426097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.426258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.426286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.426495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.426682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.426707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.426886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.427069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.427095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.427273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.427448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.427473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.427625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.427796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.427822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.428012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.428214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.428240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 
00:27:20.498 [2024-07-14 07:44:36.428422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.428576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.428602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.428795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.428976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.429002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.429175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.429359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.429385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.429561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.429741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.429766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.429961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.430116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.430142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.430288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.430492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.430517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.430684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.430875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.430902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 
00:27:20.498 [2024-07-14 07:44:36.431104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.431275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.431300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.431486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.431662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.431687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.431841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.432035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.432061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.432240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.432401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.432428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.432579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.432759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.432783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.432990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.433175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.433201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.433360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.433543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.433569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 
00:27:20.498 [2024-07-14 07:44:36.433751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.433902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.433928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.434103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.434292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.434318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.434517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.434691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.434718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.434904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.435090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.498 [2024-07-14 07:44:36.435114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.498 qpair failed and we were unable to recover it. 00:27:20.498 [2024-07-14 07:44:36.435298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.435445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.435470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.435681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.435855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.435902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.436049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.436232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.436257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 
00:27:20.499 [2024-07-14 07:44:36.436475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.436686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.436711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.436904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.437093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.437121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.437336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.437520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.437545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.437730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.437939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.437965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.438141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.438352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.438377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.438534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.438723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.438750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.438950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.439104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.439130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 
00:27:20.499 [2024-07-14 07:44:36.439292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.439467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.439493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.439686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.439871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.439898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.440084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.440270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.440297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.440490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.440655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.440682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.440872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.441052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.441077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.441303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.441456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.441481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.441680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.441858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.441890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 
00:27:20.499 [2024-07-14 07:44:36.442092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.442280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.442306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.442493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.442668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.442693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.442883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.443058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.443084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.443275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.443486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.443510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.443693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.443886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.443915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.444103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.444288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.444315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.444469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.444650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.444675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 
00:27:20.499 [2024-07-14 07:44:36.444872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.445083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.445108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.445302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.445471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.445496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.499 qpair failed and we were unable to recover it. 00:27:20.499 [2024-07-14 07:44:36.445683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.499 [2024-07-14 07:44:36.445832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.445872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.446033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.446181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.446207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.446394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.446542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.446567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.446781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.446969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.446995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.447151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.447328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.447352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 
00:27:20.500 [2024-07-14 07:44:36.447520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.447668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.447695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.447897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.448084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.448108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.448272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.448461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.448486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.448667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.448811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.448836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.449034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.449243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.449268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.449470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.449618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.449643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.449799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.450018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.450044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 
00:27:20.500 [2024-07-14 07:44:36.450227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.450409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.450436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.450622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.450775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.450801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.450969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.451157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.451182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.451397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.451598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.451624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.451769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.451953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.451978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.452161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.452330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.452358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 00:27:20.500 [2024-07-14 07:44:36.452544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.452691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.500 [2024-07-14 07:44:36.452716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.500 qpair failed and we were unable to recover it. 
00:27:20.500 [2024-07-14 07:44:36.452901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.500 [2024-07-14 07:44:36.453051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.500 [2024-07-14 07:44:36.453076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.500 qpair failed and we were unable to recover it.
00:27:20.500 [2024-07-14 07:44:36.453297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.500 [2024-07-14 07:44:36.453478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.500 [2024-07-14 07:44:36.453502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.500 qpair failed and we were unable to recover it.
[The same four-record failure sequence (two "connect() failed, errno = 111" records from posix.c:1032, one nvme_tcp.c:2289 "sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously through 2024-07-14 07:44:36.512224, log timestamps 00:27:20.500 to 00:27:20.505.]
00:27:20.505 [2024-07-14 07:44:36.512393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.512576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.512602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.505 qpair failed and we were unable to recover it. 00:27:20.505 [2024-07-14 07:44:36.512783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.512964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.512991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.505 qpair failed and we were unable to recover it. 00:27:20.505 [2024-07-14 07:44:36.513197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.513387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.513413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.505 qpair failed and we were unable to recover it. 00:27:20.505 [2024-07-14 07:44:36.513600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.513782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.513808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.505 qpair failed and we were unable to recover it. 00:27:20.505 [2024-07-14 07:44:36.513963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.514141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.514167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.505 qpair failed and we were unable to recover it. 00:27:20.505 [2024-07-14 07:44:36.514331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.514521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.505 [2024-07-14 07:44:36.514548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.514732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.514894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.514920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 
00:27:20.506 [2024-07-14 07:44:36.515078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.515257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.515285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.515439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.515614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.515641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.515816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.515969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.515996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.516210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.516363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.516389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.516589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.516739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.516765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.516948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.517131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.517158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.517338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.517513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.517538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 
00:27:20.506 [2024-07-14 07:44:36.517740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.517901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.517929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.518150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.518332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.518358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.518533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.518711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.518736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.518909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.519071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.519097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.519244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.519412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.519438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.519614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.519823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.519848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.520042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.520216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.520241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 
00:27:20.506 [2024-07-14 07:44:36.520431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.520598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.520623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.520803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.520996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.521023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.521200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.521388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.521413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.521603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.521807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.521833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.522014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.522201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.522231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.522411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.522572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.522597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.522807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.522984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.523011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 
00:27:20.506 [2024-07-14 07:44:36.523172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.523373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.523398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.523588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.523797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.523823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.523988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.524142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.524169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.524342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.524524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.524551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.524743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.524899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.524931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.525122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.525307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.525332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.506 [2024-07-14 07:44:36.525506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.525687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.525712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 
00:27:20.506 [2024-07-14 07:44:36.525882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.526082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.506 [2024-07-14 07:44:36.526112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.506 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.526269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.526447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.526473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.526655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.526837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.526863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.527072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.527232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.527257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.527425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.527606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.527633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.527803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.527963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.527991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.528200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.528364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.528389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 
00:27:20.507 [2024-07-14 07:44:36.528572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.528756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.528782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.528997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.529150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.529176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.529339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.529528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.529555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.529726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.529885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.529917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.530074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.530233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.530259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.530461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.530673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.530699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.530893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.531073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.531101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 
00:27:20.507 [2024-07-14 07:44:36.531289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.531452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.531490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.531659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.531813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.531842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.532049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.532209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.532234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.532388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.532559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.532585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.532763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.532950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.532977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.533136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.533300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.533326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.533512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.533729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.533755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 
00:27:20.507 [2024-07-14 07:44:36.533956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.534134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.534172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.534367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.534553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.534579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.534759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.534961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.534991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.535177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.535341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.535370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.535539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.535720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.535751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.535924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.536106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.536133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.536359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.536520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.536546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 
00:27:20.507 [2024-07-14 07:44:36.536706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.536893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.536922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.537076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.537288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.537315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.507 [2024-07-14 07:44:36.537484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.537671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.507 [2024-07-14 07:44:36.537699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.507 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.537879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.538039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.538065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.538231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.538414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.538441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.538623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.538799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.538827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.539031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.539185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.539211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 
00:27:20.508 [2024-07-14 07:44:36.539394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.539537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.539561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.539714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.539923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.539960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.540132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.540291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.540319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.540475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.540646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.540671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.540831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.541034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.541061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.541221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.541397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.541423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.541579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.541784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.541818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 
00:27:20.508 [2024-07-14 07:44:36.542007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.542192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.542218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.542411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.542586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.542614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.542827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.543018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.543044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.543201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.543421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.543447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.543664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.543854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.543901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.544066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.544241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.544267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.544448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.544634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.544660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 
00:27:20.508 [2024-07-14 07:44:36.544839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.545020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.545046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.545206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.545418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.545445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.545641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.545795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.545820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.546016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.546173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.546199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.546383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.546567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.546593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.546770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.546964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.546991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 00:27:20.508 [2024-07-14 07:44:36.547193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.547350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.547375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.508 qpair failed and we were unable to recover it. 
00:27:20.508 [2024-07-14 07:44:36.547559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.508 [2024-07-14 07:44:36.547767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.547792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.547978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.548126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.548152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.548363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.548531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.548556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.548731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.548945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.548971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.549152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.549344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.549371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.549559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.549747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.549772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.549953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.550105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.550132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 
00:27:20.509 [2024-07-14 07:44:36.550325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.550495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.550521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.550710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.550896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.550924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.551080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.551232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.551258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.551439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.551660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.551691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.551845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.552014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.552041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.552199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.552379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.552405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.552602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.552793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.552818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 
00:27:20.509 [2024-07-14 07:44:36.552994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.553145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.553171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.553353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.553506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.553534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.553715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.553889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.553916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.554106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.554258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.554284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.554490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.554670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.554695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.554882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.555033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.555059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.555233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.555423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.555451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 
00:27:20.509 [2024-07-14 07:44:36.555637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.555789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.555815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.555975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.556162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.556188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.556370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.556561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.556588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.556745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.556916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.556942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.557112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.557271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.557297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.557488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.557674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.557700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.557854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.558038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.558066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 
00:27:20.509 [2024-07-14 07:44:36.558280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.558457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.558484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.558675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.558863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.558896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.509 qpair failed and we were unable to recover it. 00:27:20.509 [2024-07-14 07:44:36.559057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.509 [2024-07-14 07:44:36.559236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.559262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.559437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.559654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.559681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.559862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.560068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.560094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.560271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.560475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.560500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.560668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.560883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.560911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 
00:27:20.510 [2024-07-14 07:44:36.561074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.561275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.561301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.561483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.561663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.561690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.561839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.562032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.562059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.562244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.562394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.562421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.562607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.562787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.562813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.563000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.563170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.563195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.563397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.563556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.563581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 
00:27:20.510 [2024-07-14 07:44:36.563755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.563946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.563972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.564157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.564342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.564368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.564544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.564709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.564733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.564885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.565059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.565085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.565268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.565466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.565490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.565653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.565808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.565833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.566067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.566209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.566233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 
00:27:20.510 [2024-07-14 07:44:36.566445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.566588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.566613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.566777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.566969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.566994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.567175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.567354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.567379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.567558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.567715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.567739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.567929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.568114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.568139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.568291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.568473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.568498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.568697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.568912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.568939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 
00:27:20.510 [2024-07-14 07:44:36.569097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.569287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.569311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.569501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.569714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.569740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.569917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.570061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.570085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.570259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.570438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.570463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.510 qpair failed and we were unable to recover it. 00:27:20.510 [2024-07-14 07:44:36.570643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.570793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.510 [2024-07-14 07:44:36.570817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.571025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.571179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.571204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.571356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.571524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.571549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 
00:27:20.511 [2024-07-14 07:44:36.571707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.571914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.571939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.572086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.572258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.572283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.572436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.572618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.572644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.572830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.573035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.573061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.573290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.573476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.573502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.573651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.573836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.573873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.574045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.574223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.574248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 
00:27:20.511 [2024-07-14 07:44:36.574397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.574602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.574627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.574804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.574972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.575001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.575176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.575363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.575388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.575570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.575755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.575780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.575946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.576126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.576164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.576354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.576540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.576564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.576743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.576939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.576965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 
00:27:20.511 [2024-07-14 07:44:36.577155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.577337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.577363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.577516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.577729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.577755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.577934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.578089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.578114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.578275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.578435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.578461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.578621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.578802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.578828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.579032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.579199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.579223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.511 [2024-07-14 07:44:36.579410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.579588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.579613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 
00:27:20.511 [2024-07-14 07:44:36.579828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.580043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.511 [2024-07-14 07:44:36.580068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.511 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.580270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.580475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.580504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.580687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.580870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.580896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.581110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.581322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.581347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.581535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.581692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.581718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.581904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.582085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.582110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.582278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.582456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.582481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 
00:27:20.512 [2024-07-14 07:44:36.582637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.582787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.582811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.583004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.583184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.583209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.583423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.583600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.583625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.583803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.584017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.584043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.584220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.584382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.584414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.584571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.584750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.584776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.584960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.585147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.585171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 
00:27:20.512 [2024-07-14 07:44:36.585330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.585478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.585506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.585718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.585902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.585927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.586104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.586266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.586293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.586443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.586595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.586619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.586798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.586977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.587003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.587172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.587348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.587374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.587532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.587713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.587737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 
00:27:20.512 [2024-07-14 07:44:36.587909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.588130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.588160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.512 qpair failed and we were unable to recover it. 00:27:20.512 [2024-07-14 07:44:36.588349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.588528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.512 [2024-07-14 07:44:36.588554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.588708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.588859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.588889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.589069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.589246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.589271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.589419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.589597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.589622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.589798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.589966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.589992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.590176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.590349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.590374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 
00:27:20.513 [2024-07-14 07:44:36.590555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.590707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.590733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.590923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.591092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.591117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.591297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.591514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.591540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.591690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.591845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.591888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.592105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.592292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.592317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.592474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.592627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.592652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.592871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.593028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.593052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 
00:27:20.513 [2024-07-14 07:44:36.593219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.593374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.593399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.593586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.593771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.593799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.593956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.594135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.594161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.594370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.594519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.594545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.594721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.594902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.594927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.595074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.595233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.595260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.595452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.595633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.595658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 
00:27:20.513 [2024-07-14 07:44:36.595876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.596023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.596049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.596257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.596436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.596460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.596677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.596853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.596884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.597079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.597232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.597258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.597442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.597610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.597635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.513 [2024-07-14 07:44:36.597827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.597985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.513 [2024-07-14 07:44:36.598011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.513 qpair failed and we were unable to recover it. 00:27:20.514 [2024-07-14 07:44:36.598169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.598367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.598393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.514 qpair failed and we were unable to recover it. 
00:27:20.514 [2024-07-14 07:44:36.598540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.598737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.598762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.514 qpair failed and we were unable to recover it. 00:27:20.514 [2024-07-14 07:44:36.598946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.599132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.599157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.514 qpair failed and we were unable to recover it. 00:27:20.514 [2024-07-14 07:44:36.599337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.599514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.599539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.514 qpair failed and we were unable to recover it. 00:27:20.514 [2024-07-14 07:44:36.599706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.599860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.599892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.514 qpair failed and we were unable to recover it. 00:27:20.514 [2024-07-14 07:44:36.600104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.600288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.600314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.514 qpair failed and we were unable to recover it. 00:27:20.514 [2024-07-14 07:44:36.600503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.600655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.600681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.514 qpair failed and we were unable to recover it. 00:27:20.514 [2024-07-14 07:44:36.600893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.601046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.514 [2024-07-14 07:44:36.601070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.514 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-07-14 07:44:36.659716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.659941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.659967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.660119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.660300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.660325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.660490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.660698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.660724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.660883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.661052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.661078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.661249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.661407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.661433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.661614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.661796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.661821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.662007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.662222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.662247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-07-14 07:44:36.662405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.662587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.662612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.662766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.662936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.662962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.663111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.663284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.663311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.663490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.663663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.663689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.663850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.664076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.664103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.664303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.664490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.664516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.664703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.664883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.664909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-07-14 07:44:36.665095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.665252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.665279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.665463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.665632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.665658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.665821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.666004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.666031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.666225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.666418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.666445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-07-14 07:44:36.666606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.666788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.789 [2024-07-14 07:44:36.666813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.666968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.667150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.667177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.667363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.667526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.667551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 
00:27:20.790 [2024-07-14 07:44:36.667705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.667846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.667887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.668080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.668256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.668281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.668479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.668686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.668713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.668905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.669057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.669082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.669273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.669435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.669460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.669610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.669819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.669844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.670042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.670227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.670252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 
00:27:20.790 [2024-07-14 07:44:36.670436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.670592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.670617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.670806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.670955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.670982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.671170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.671317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.671342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.671534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.671691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.671719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.671916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.672096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.672121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.672354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.672571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.672596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.672795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.672976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.673003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 
00:27:20.790 [2024-07-14 07:44:36.673212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.673380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.673406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.673604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.673787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.673812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.673995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.674149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.674177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.674367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.674555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.674582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.674760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.674913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.674939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.675092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.675260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.675285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.675506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.675651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.675677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 
00:27:20.790 [2024-07-14 07:44:36.675848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.676071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.676096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.676273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.676434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.676460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.676604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.676792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.676817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.677013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.677184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.677209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.677393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.677574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.677598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.677793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.677980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.678005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 00:27:20.790 [2024-07-14 07:44:36.678163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.678372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.678398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.790 qpair failed and we were unable to recover it. 
00:27:20.790 [2024-07-14 07:44:36.678577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.790 [2024-07-14 07:44:36.678756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.678782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.678967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.679171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.679196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.679377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.679555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.679581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.679767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.679962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.679988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.680201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.680362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.680388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.680542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.680718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.680743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.680945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.681118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.681144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 
00:27:20.791 [2024-07-14 07:44:36.681304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.681496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.681522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.681686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.681895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.681921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.682076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.682260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.682285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.682433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.682622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.682647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.682834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.683007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.683034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.683206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.683385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.683410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.683623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.683820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.683854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 
00:27:20.791 [2024-07-14 07:44:36.684066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.684241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.684266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.684455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.684614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.684639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.684799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.684997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.685024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.685211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.685364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.685390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.685581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.685741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.685766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.685949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.686099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.686125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.686307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.686523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.686548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 
00:27:20.791 [2024-07-14 07:44:36.686735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.686896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.686922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.687124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.687341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.687366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.687552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.687731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.687756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.687912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.688089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.688115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.688291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.688446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.688471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.688633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.688813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.688838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.689038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.689189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.689214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 
00:27:20.791 [2024-07-14 07:44:36.689427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.689596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.689626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.689795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.689963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.689991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.690176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.690381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.791 [2024-07-14 07:44:36.690406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.791 qpair failed and we were unable to recover it. 00:27:20.791 [2024-07-14 07:44:36.690575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.690734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.690762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.690928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.691089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.691117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.691309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.691491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.691518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.691702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.691886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.691913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 
00:27:20.792 [2024-07-14 07:44:36.692085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.692227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.692253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.692444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.692626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.692651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.692798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.693010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.693038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.693217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.693417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.693442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.693628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.693781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.693807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.694010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.694183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.694209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.694404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.694576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.694602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 
00:27:20.792 [2024-07-14 07:44:36.694815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.695001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.695027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.695179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.695357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.695383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.695594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.695744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.695770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.695946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.696129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.696155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.696317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.696506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.696532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.696714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.696862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.696897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.697089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.697236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.697267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 
00:27:20.792 [2024-07-14 07:44:36.697446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.697628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.697654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.697872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.698037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.698062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.698212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.698389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.698415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.698591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.698760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.698786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.698985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.699145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.699182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.699364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.699521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.699549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.792 qpair failed and we were unable to recover it. 00:27:20.792 [2024-07-14 07:44:36.699735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.792 [2024-07-14 07:44:36.699923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.699949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 
00:27:20.793 [2024-07-14 07:44:36.700121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.700283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.700317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 00:27:20.793 [2024-07-14 07:44:36.700529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.700738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.700764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 00:27:20.793 [2024-07-14 07:44:36.701062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.701228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.701256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 00:27:20.793 [2024-07-14 07:44:36.701463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.701651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.701676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 00:27:20.793 [2024-07-14 07:44:36.701844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.702012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.702040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 00:27:20.793 [2024-07-14 07:44:36.702228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.702376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.702401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 00:27:20.793 [2024-07-14 07:44:36.702585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.702737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.702763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 
00:27:20.793 [2024-07-14 07:44:36.708213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.708380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.708405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:20.793 qpair failed and we were unable to recover it.
00:27:20.793 [2024-07-14 07:44:36.708592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.708751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.708778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:20.793 qpair failed and we were unable to recover it.
00:27:20.793 [2024-07-14 07:44:36.708979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.709142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.709182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.793 qpair failed and we were unable to recover it.
00:27:20.793 [2024-07-14 07:44:36.709384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.709551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.709578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.793 qpair failed and we were unable to recover it.
00:27:20.793 [2024-07-14 07:44:36.709738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.709954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.709982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.793 qpair failed and we were unable to recover it.
00:27:20.793 [2024-07-14 07:44:36.710165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.710357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.710394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.793 qpair failed and we were unable to recover it.
00:27:20.793 [2024-07-14 07:44:36.710550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.710758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.793 [2024-07-14 07:44:36.710784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.793 qpair failed and we were unable to recover it.
00:27:20.793 [2024-07-14 07:44:36.710953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.711105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.711131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 00:27:20.793 [2024-07-14 07:44:36.711348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.711511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-14 07:44:36.711539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.793 qpair failed and we were unable to recover it. 00:27:20.793 [2024-07-14 07:44:36.711709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.711886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.711912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.712105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.712259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.712284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.712467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.712625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.712652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.712856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.713056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.713082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.713275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.713450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.713485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 
00:27:20.794 [2024-07-14 07:44:36.713700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.713876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.713903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.714092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.714270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.714303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.714490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.714647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.714673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.714884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.715061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.715086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.715255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.715441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.715466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.715628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.715838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.715863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.716032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.716189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.716215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 
00:27:20.794 [2024-07-14 07:44:36.716373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.716554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.716579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.716749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.716907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.716933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.717119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.717298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.717325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.717516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.717675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.717702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.717855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.718071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.718097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.718275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.718458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.718485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.718668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.718825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.718850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 
00:27:20.794 [2024-07-14 07:44:36.719047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.719224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.719250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.719453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.719641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.719667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.719817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.719990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.720016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.720173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.720329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.720355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.720537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.720728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.720753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.720950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.721134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.721171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.721336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.721515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.721551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 
00:27:20.794 [2024-07-14 07:44:36.721740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.721922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.721947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.722095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.722275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.722301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.722492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.722647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.722672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.722856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.723021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.723046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.794 [2024-07-14 07:44:36.723229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.723420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.794 [2024-07-14 07:44:36.723447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.794 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.723603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.723762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.723787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.723993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.724172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.724198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 
00:27:20.795 [2024-07-14 07:44:36.724411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.724593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.724620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.724818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.725006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.725033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.725225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.725389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.725415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.725598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.725822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.725847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.726025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.726207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.726232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.726386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.726533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.726559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.726735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.726891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.726917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 
00:27:20.795 [2024-07-14 07:44:36.727103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.727292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.727325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.727474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.727651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.727676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.727848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.728041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.728066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.728239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.728451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.728480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.728673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.728853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.728891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.729064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.729221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.729248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.729441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.729621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.729646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 
00:27:20.795 [2024-07-14 07:44:36.729801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.729998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.730024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.730180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.730329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.730354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.730529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.730714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.730741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.730920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.731103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.731129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.731282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.731492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.731517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.731672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.731892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.731919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.732093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.732309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.732339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 
00:27:20.795 [2024-07-14 07:44:36.732492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.732667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.732692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.732892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.733044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.733069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.733253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.733431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.733456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.733612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.733772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.733799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.734000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.734177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.734202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.734359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.734558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.734585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 00:27:20.795 [2024-07-14 07:44:36.734735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.734921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.734949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.795 qpair failed and we were unable to recover it. 
00:27:20.795 [2024-07-14 07:44:36.735133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.795 [2024-07-14 07:44:36.735348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.735373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.735533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.735695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.735721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.735898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.736047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.736076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.736276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.736466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.736494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.736680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.736850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.736888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.737079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.737263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.737288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.737488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.737690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.737715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 
00:27:20.796 [2024-07-14 07:44:36.737878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.738065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.738091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.738267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.738420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.738445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.738598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.738782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.738807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.738974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.739125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.739151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.739330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.739520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.739545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.739695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.739896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.739926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.740106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.740307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.740332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 
00:27:20.796 [2024-07-14 07:44:36.740513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.740708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.740738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.740900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.741087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.741113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.741296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.741450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.741475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.741659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.741844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.741874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.742061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.742243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.742268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.742452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.742652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.742677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.742881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.743031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.743056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 
00:27:20.796 [2024-07-14 07:44:36.743258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.743468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.743492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.743671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.743851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.743889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.744069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.744224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.744250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.744455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.744633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.744657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.744815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.745019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.745045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.796 qpair failed and we were unable to recover it. 00:27:20.796 [2024-07-14 07:44:36.745230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.796 [2024-07-14 07:44:36.745406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.745431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.745615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.745789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.745814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 
00:27:20.797 [2024-07-14 07:44:36.745977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.746161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.746197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.746381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.746531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.746556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.746742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.746914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.746940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.747125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.747313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.747338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.747523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.747697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.747723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.747886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.748064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.748090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.748270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.748460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.748486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 
00:27:20.797 [2024-07-14 07:44:36.748634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.748786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.748813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.749020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.749175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.749205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.749364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.749538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.749563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.749733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.749915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.749942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.750135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.750315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.750341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.750523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.750675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.750700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.750878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.751066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.751091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 
00:27:20.797 [2024-07-14 07:44:36.751258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.751469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.751494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.751676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.751849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.751904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.752092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.752257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.752282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.752464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.752621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.752647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.752815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.752973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.752999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.753199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.753391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.753416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 00:27:20.797 [2024-07-14 07:44:36.753591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.753737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.797 [2024-07-14 07:44:36.753761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420 00:27:20.797 qpair failed and we were unable to recover it. 
00:27:20.797 [2024-07-14 07:44:36.753956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.754109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.754133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.797 qpair failed and we were unable to recover it.
00:27:20.797 [2024-07-14 07:44:36.754313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.754492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.754517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.797 qpair failed and we were unable to recover it.
00:27:20.797 [2024-07-14 07:44:36.754705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.754856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.754886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.797 qpair failed and we were unable to recover it.
00:27:20.797 [2024-07-14 07:44:36.755091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.755297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.755322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.797 qpair failed and we were unable to recover it.
00:27:20.797 [2024-07-14 07:44:36.755503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.755651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.755675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.797 qpair failed and we were unable to recover it.
00:27:20.797 [2024-07-14 07:44:36.755889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.756071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.756096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.797 qpair failed and we were unable to recover it.
00:27:20.797 [2024-07-14 07:44:36.756253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.756433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.756458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.797 qpair failed and we were unable to recover it.
00:27:20.797 [2024-07-14 07:44:36.756637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.756813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.797 [2024-07-14 07:44:36.756838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.797 qpair failed and we were unable to recover it.
00:27:20.797 [2024-07-14 07:44:36.757038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.757223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.757248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.757450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.757604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.757628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.757807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.757974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.757999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.758206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.758359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.758386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.758568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.758748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.758774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.758948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.759128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.759153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.759344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.759492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.759516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.759664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.759840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.759886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.760079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.760260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.760285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.760443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.760614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.760639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.760817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.760996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.761021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.761210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.761418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.761442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.761620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.761795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.761820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.761970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.762130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.762155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.762343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.762529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.762554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.762737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.762950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.762976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.763140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.763288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.763313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.763505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.763669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.763694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.763876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.764034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.764060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.764257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.764413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.764438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.764603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.764777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.764802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.764985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.765142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.765167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.765357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.765558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.765583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.765735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.765915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.765941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.766096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.766258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.766284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.766467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.766675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.766699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.766889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.767047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.767073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.767258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.767411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.767437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.767593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.767792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.767817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.768027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.768185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.768212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.798 qpair failed and we were unable to recover it.
00:27:20.798 [2024-07-14 07:44:36.768379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.768577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.798 [2024-07-14 07:44:36.768603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.768791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.768973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.768998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.769151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.769355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.769380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.769561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.769714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.769741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.769904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.770092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.770117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.770295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.770475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.770500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.770665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.770879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.770905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.771092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.771262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.771287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.771474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.771657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.771682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.771840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.772029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.772055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.772236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.772406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.772430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.772585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.772740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.772765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.772949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.773104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.773131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.773301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.773510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.773535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.773686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.773847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.773876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.774037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.774188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.774213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.774420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.774571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.774596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.774749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.774933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.774959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.775151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.775329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.775356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.775514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.775694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.775719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.775891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 07:44:36 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:20.799 07:44:36 -- common/autotest_common.sh@852 -- # return 0
00:27:20.799 [2024-07-14 07:44:36.776044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.776069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 07:44:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:27:20.799 [2024-07-14 07:44:36.776223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 07:44:36 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:20.799 07:44:36 -- common/autotest_common.sh@10 -- # set +x
00:27:20.799 [2024-07-14 07:44:36.776390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.776416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.776566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.776723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.776750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.776921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.777072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.777097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
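For context on the records above: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections at 10.0.0.2:4420 (4420 is the default NVMe/TCP port), so every connect() issued by the host's initiator fails immediately and the qpair is torn down. Below is a minimal, illustrative C sketch, not part of this log or of SPDK, that reproduces the same errno by connecting to a local port with no listener; the port number is chosen to mirror the log and is otherwise arbitrary.

/* econnrefused_demo.c - illustrative only; not part of the SPDK test.
 * Connects to a TCP port where nothing is listening and prints errno,
 * which on Linux is 111 (ECONNREFUSED), the same value reported by the
 * posix_sock_create records above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener bound to the port, connect() fails with
         * errno = 111 (ECONNREFUSED) on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

On a machine with nothing bound to 127.0.0.1:4420 this prints "connect() failed, errno = 111 (Connection refused)", the same failure mode the log keeps recording.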
00:27:20.799 [2024-07-14 07:44:36.777293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.777445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.777470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.777650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.777825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.777855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.778035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.778184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.778208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.778396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.778550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.778578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.778751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.778955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.778981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.779135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.779297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.779322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.799 qpair failed and we were unable to recover it.
00:27:20.799 [2024-07-14 07:44:36.779470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.799 [2024-07-14 07:44:36.779639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.779665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.779843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.780013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.780039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.780211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.780382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.780409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.780567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.780773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.780799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.780961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.781118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.781144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.781330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.781518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.781550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.781707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.781876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.781904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.782059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.782218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.782246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.782426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.782610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.782637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.782786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.783007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.783034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.783225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.783382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.783408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.783593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.783768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.783795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.783982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.784167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.784194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.784363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.784530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.784556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.784739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.784894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.784921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.785071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.785220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.785251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.785462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.785627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.785666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.785878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.786072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.786099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.786287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.786462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.786488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.786694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.786896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.786933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.787097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.787266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.787293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.787441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.787611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.787637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.787821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.787994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.788020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.788209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.788407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.788433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.788646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.788816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.788842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.789041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.789251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.789277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.789469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.789683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.789710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.789919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.790096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.790122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.790323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.790528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.790554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.790716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.790903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.790931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.800 qpair failed and we were unable to recover it.
00:27:20.800 [2024-07-14 07:44:36.791084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.791254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.800 [2024-07-14 07:44:36.791280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.791462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.791613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.791639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.791821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.791980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.792008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.792182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.792370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.792396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.792581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.792769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.792794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.792984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.793144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.793177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.793349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.793524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.793551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.793739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.793944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.793971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.794164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.794346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.794374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.794565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.794749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.794775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.794960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.795125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.795150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.795311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.795493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.795518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.795669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 07:44:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:20.801 [2024-07-14 07:44:36.795823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.795849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 07:44:36 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:20.801 [2024-07-14 07:44:36.796021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 07:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:20.801 07:44:36 -- common/autotest_common.sh@10 -- # set +x
00:27:20.801 [2024-07-14 07:44:36.796198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.796237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.796445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.796595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.796621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.796799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.796954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.796980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.797135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.797285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.797310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.797465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.797638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.797664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.797843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.798034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.798060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.798252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.798403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.798430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.798712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.798907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.798943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.799117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.799326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.799352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.799531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.799706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.799732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.799946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.800128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.800154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.800327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.800507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.800533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.801 [2024-07-14 07:44:36.800740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.800939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.801 [2024-07-14 07:44:36.800969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:20.801 qpair failed and we were unable to recover it.
00:27:20.802 [2024-07-14 07:44:36.801155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.802 [2024-07-14 07:44:36.801327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.802 [2024-07-14 07:44:36.801353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420
00:27:20.802 qpair failed and we were unable to recover it.
00:27:20.802 [2024-07-14 07:44:36.801508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.801689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.801715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.801877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.802033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.802059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.802229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.802386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.802411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.802616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.802766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.802791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.802977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.803132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.803157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.803326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.803507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.803531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.803746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.803932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.803957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 
00:27:20.802 [2024-07-14 07:44:36.804105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.804260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.804285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.804488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.804676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.804702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.804855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.805070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.805094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.805248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.805405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.805429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.805615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.805793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.805818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.805990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.806142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.806166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.806359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.806536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.806561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 
00:27:20.802 [2024-07-14 07:44:36.806738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.806900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.806935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.807122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.807281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.807306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.807487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.807663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.807689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.807901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.808054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.808079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.808254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.808403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.808432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.808638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.808789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.808814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.808975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.809158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.809184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 
00:27:20.802 [2024-07-14 07:44:36.809379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.809549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.809574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.809729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.809915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.809940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.810102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.810268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.810293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.810478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.810638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.810666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.810836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.811027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.811052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.811314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.811496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.811520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.802 [2024-07-14 07:44:36.811666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.811858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.811888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 
00:27:20.802 [2024-07-14 07:44:36.812085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.812235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.802 [2024-07-14 07:44:36.812260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.802 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.812513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.812692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.812717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.812898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.813078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.813102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.813328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.813513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.813538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.813719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.813894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.813920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.814095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.814288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.814312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.814504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.814676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.814700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 
00:27:20.803 [2024-07-14 07:44:36.814863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.815063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.815087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.815237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.815412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.815437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.815613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.815824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.815848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.816019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.816175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.816200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.816492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.816648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.816672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.816876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.817052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.817076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.817271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.817431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.817455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 
00:27:20.803 [2024-07-14 07:44:36.817611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.817789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.817813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.818001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.818200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.818224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.818399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.818582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.818606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.818784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.818940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.818965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.819156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.819320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.819345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.819502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.819653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.819678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 00:27:20.803 [2024-07-14 07:44:36.819829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.820028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.803 [2024-07-14 07:44:36.820054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc69f0 with addr=10.0.0.2, port=4420 00:27:20.803 qpair failed and we were unable to recover it. 
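For reference, the rpc_cmd bdev_malloc_create step above creates the RAM-backed block device the test will export. A minimal standalone sketch, assuming a running SPDK target and the stock scripts/rpc.py client (the test helper rpc_cmd is effectively a wrapper around it; the path is illustrative):

  # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

The interleaved "connect() failed, errno = 111" records are ECONNREFUSED from the host-side connect/retry loop running concurrently: nothing is listening on 10.0.0.2:4420 yet, so every connect() attempt is refused until the listener is added further down.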
00:27:20.803 Malloc0
00:27:20.803 07:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:20.803 07:44:36 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:20.803 07:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:20.803 07:44:36 -- common/autotest_common.sh@10 -- # set +x
00:27:20.804 [2024-07-14 07:44:36.824040] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... repeated connect()/qpair retry records (errno = 111, tqpair=0x1bc69f0, addr=10.0.0.2, port=4420) elided ...]
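The nvmf_create_transport call above initializes the target's TCP transport layer and is what produces the "*** TCP Transport Init ***" notice. A sketch of the same step issued directly, assuming the stock rpc.py client (the -o flag is reproduced verbatim as logged; all other transport tuning options are left at their defaults):

  # Initialize the NVMe-oF TCP transport inside the running target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o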
[... repeated connect()/qpair retry records elided; from 07:44:36.828107 the failing qpair is again tqpair=0x7f0c30000b90, addr=10.0.0.2, port=4420 ...]
00:27:20.804 07:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:20.804 07:44:36 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:20.804 07:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:20.804 07:44:36 -- common/autotest_common.sh@10 -- # set +x
[... repeated connect()/qpair retry records (errno = 111, tqpair=0x7f0c30000b90) elided ...]
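The nvmf_create_subsystem step above creates the NVMe-oF subsystem that will own the namespace and listener. A hedged equivalent via rpc.py, where -s sets the subsystem serial number and -a allows any host NQN to connect (reasonable for a self-contained test, not for production):

  # Create subsystem cnode1; allow any host, set a fixed serial number
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001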
[... repeated connect()/qpair retry records (errno = 111, tqpair=0x7f0c30000b90) elided ...]
00:27:20.805 07:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:20.805 07:44:36 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:20.805 07:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:20.805 07:44:36 -- common/autotest_common.sh@10 -- # set +x
[... repeated connect()/qpair retry records (errno = 111, tqpair=0x7f0c30000b90) elided ...]
00:27:20.806 07:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
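The nvmf_subsystem_add_ns step above attaches the Malloc0 bdev to the subsystem as a namespace, which is what a connecting host would eventually see as an NVMe namespace. Sketch, same assumptions as above:

  # Expose bdev Malloc0 as a namespace of subsystem cnode1 (NSID auto-assigned by the target)
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0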
00:27:20.806 07:44:36 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:20.806 [2024-07-14 07:44:36.848542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 07:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:20.806 07:44:36 -- common/autotest_common.sh@10 -- # set +x
00:27:20.806 [2024-07-14 07:44:36.848700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.848725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.848891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.849048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.849073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.849303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.849482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.849506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.849699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.849880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.849906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.850112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.850292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.850317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.850491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.850651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.850675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.850847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.851039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.851065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.851281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.851460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.851485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.851638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.851797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.851824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.852015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.852197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.806 [2024-07-14 07:44:36.852222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0c30000b90 with addr=10.0.0.2, port=4420
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.852286] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:20.806 [2024-07-14 07:44:36.854907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.806 [2024-07-14 07:44:36.855107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.806 [2024-07-14 07:44:36.855136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.806 [2024-07-14 07:44:36.855170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.806 [2024-07-14 07:44:36.855186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.806 [2024-07-14 07:44:36.855260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 07:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:20.806 07:44:36 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:20.806 07:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:20.806 07:44:36 -- common/autotest_common.sh@10 -- # set +x
00:27:20.806 07:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:20.806 07:44:36 -- host/target_disconnect.sh@58 -- # wait 18725
00:27:20.806 [2024-07-14 07:44:36.864692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.806 [2024-07-14 07:44:36.864858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.806 [2024-07-14 07:44:36.864894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.806 [2024-07-14 07:44:36.864910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.806 [2024-07-14 07:44:36.864923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.806 [2024-07-14 07:44:36.864954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.806 qpair failed and we were unable to recover it.
00:27:20.806 [2024-07-14 07:44:36.874692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.806 [2024-07-14 07:44:36.874891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.807 [2024-07-14 07:44:36.874921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.807 [2024-07-14 07:44:36.874937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.807 [2024-07-14 07:44:36.874951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.807 [2024-07-14 07:44:36.874982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.807 qpair failed and we were unable to recover it.
00:27:20.807 [2024-07-14 07:44:36.884670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.807 [2024-07-14 07:44:36.884886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.807 [2024-07-14 07:44:36.884914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.807 [2024-07-14 07:44:36.884930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.807 [2024-07-14 07:44:36.884943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.807 [2024-07-14 07:44:36.884973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.807 qpair failed and we were unable to recover it.
00:27:20.807 [2024-07-14 07:44:36.894686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.807 [2024-07-14 07:44:36.894849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.807 [2024-07-14 07:44:36.894885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.807 [2024-07-14 07:44:36.894902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.807 [2024-07-14 07:44:36.894915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.807 [2024-07-14 07:44:36.894944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.807 qpair failed and we were unable to recover it.
00:27:20.807 [2024-07-14 07:44:36.904687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.807 [2024-07-14 07:44:36.904855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.807 [2024-07-14 07:44:36.904889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.807 [2024-07-14 07:44:36.904906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.807 [2024-07-14 07:44:36.904919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.807 [2024-07-14 07:44:36.904948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.807 qpair failed and we were unable to recover it.
00:27:20.807 [2024-07-14 07:44:36.914742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.807 [2024-07-14 07:44:36.914936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.807 [2024-07-14 07:44:36.914964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.807 [2024-07-14 07:44:36.914979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.807 [2024-07-14 07:44:36.914992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.807 [2024-07-14 07:44:36.915022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.807 qpair failed and we were unable to recover it.
00:27:20.807 [2024-07-14 07:44:36.924714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.807 [2024-07-14 07:44:36.924886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.807 [2024-07-14 07:44:36.924913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.807 [2024-07-14 07:44:36.924929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.807 [2024-07-14 07:44:36.924942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.807 [2024-07-14 07:44:36.924971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.807 qpair failed and we were unable to recover it.
00:27:20.807 [2024-07-14 07:44:36.934758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.807 [2024-07-14 07:44:36.934925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.807 [2024-07-14 07:44:36.934952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.807 [2024-07-14 07:44:36.934968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.807 [2024-07-14 07:44:36.934981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.807 [2024-07-14 07:44:36.935011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.807 qpair failed and we were unable to recover it.
00:27:20.807 [2024-07-14 07:44:36.944822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.807 [2024-07-14 07:44:36.945027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.807 [2024-07-14 07:44:36.945055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.807 [2024-07-14 07:44:36.945077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.807 [2024-07-14 07:44:36.945092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:20.807 [2024-07-14 07:44:36.945122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:20.807 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:36.954833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:36.955005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:36.955033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:36.955049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:36.955063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.067 [2024-07-14 07:44:36.955093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.067 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:36.964839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:36.965015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:36.965042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:36.965058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:36.965072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.067 [2024-07-14 07:44:36.965102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.067 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:36.974912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:36.975079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:36.975107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:36.975123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:36.975137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.067 [2024-07-14 07:44:36.975168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.067 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:36.984939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:36.985133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:36.985162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:36.985182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:36.985211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.067 [2024-07-14 07:44:36.985243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.067 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:36.994937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:36.995101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:36.995129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:36.995145] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:36.995159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.067 [2024-07-14 07:44:36.995204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.067 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:37.004963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:37.005153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:37.005180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:37.005196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:37.005209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.067 [2024-07-14 07:44:37.005240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.067 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:37.015012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:37.015182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:37.015210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:37.015226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:37.015254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.067 [2024-07-14 07:44:37.015284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.067 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:37.025033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:37.025192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:37.025219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:37.025235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:37.025250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.067 [2024-07-14 07:44:37.025280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.067 qpair failed and we were unable to recover it.
00:27:21.067 [2024-07-14 07:44:37.035047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.067 [2024-07-14 07:44:37.035204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.067 [2024-07-14 07:44:37.035231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.067 [2024-07-14 07:44:37.035253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.067 [2024-07-14 07:44:37.035267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.035297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.045119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.045347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.045374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.045392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.045406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.045450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.055365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.055556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.055583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.055598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.055611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.055655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.065175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.065337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.065365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.065381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.065394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.065426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.075213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.075376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.075401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.075416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.075430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.075460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.085254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.085416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.085458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.085474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.085488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.085532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.095254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.095422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.095450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.095466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.095479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.095509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.105273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.105431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.105458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.105475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.105489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.105518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.115296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.115454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.115481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.115497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.115511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.115541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.125359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.125522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.125557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.125590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.125603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.125647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.135364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.135572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.135616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.135632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.135645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.135690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.145434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.145603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.145634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.145652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.145682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.145713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.155431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.155593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.155621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.155651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.155665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.155695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.165394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.165556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.165585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.165601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.165615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.165651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.175465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.068 [2024-07-14 07:44:37.175625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.068 [2024-07-14 07:44:37.175652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.068 [2024-07-14 07:44:37.175669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.068 [2024-07-14 07:44:37.175682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.068 [2024-07-14 07:44:37.175724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.068 qpair failed and we were unable to recover it.
00:27:21.068 [2024-07-14 07:44:37.185476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.069 [2024-07-14 07:44:37.185669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.069 [2024-07-14 07:44:37.185696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.069 [2024-07-14 07:44:37.185712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.069 [2024-07-14 07:44:37.185726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.069 [2024-07-14 07:44:37.185756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.069 qpair failed and we were unable to recover it.
00:27:21.069 [2024-07-14 07:44:37.195559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.069 [2024-07-14 07:44:37.195724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.069 [2024-07-14 07:44:37.195754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.069 [2024-07-14 07:44:37.195774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.069 [2024-07-14 07:44:37.195788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.069 [2024-07-14 07:44:37.195833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.069 qpair failed and we were unable to recover it.
00:27:21.069 [2024-07-14 07:44:37.205533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.069 [2024-07-14 07:44:37.205695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.069 [2024-07-14 07:44:37.205722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.069 [2024-07-14 07:44:37.205739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.069 [2024-07-14 07:44:37.205752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.069 [2024-07-14 07:44:37.205782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.069 qpair failed and we were unable to recover it.
00:27:21.069 [2024-07-14 07:44:37.215565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.069 [2024-07-14 07:44:37.215802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.069 [2024-07-14 07:44:37.215837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.069 [2024-07-14 07:44:37.215876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.069 [2024-07-14 07:44:37.215893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.069 [2024-07-14 07:44:37.215923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.069 qpair failed and we were unable to recover it.
00:27:21.069 [2024-07-14 07:44:37.225594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.069 [2024-07-14 07:44:37.225755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.069 [2024-07-14 07:44:37.225783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.069 [2024-07-14 07:44:37.225799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.069 [2024-07-14 07:44:37.225813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.069 [2024-07-14 07:44:37.225843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.069 qpair failed and we were unable to recover it.
00:27:21.069 [2024-07-14 07:44:37.235644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.069 [2024-07-14 07:44:37.235807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.069 [2024-07-14 07:44:37.235837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.069 [2024-07-14 07:44:37.235857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.069 [2024-07-14 07:44:37.235880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.069 [2024-07-14 07:44:37.235911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.069 qpair failed and we were unable to recover it.
00:27:21.328 [2024-07-14 07:44:37.245664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.328 [2024-07-14 07:44:37.245829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.328 [2024-07-14 07:44:37.245857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.328 [2024-07-14 07:44:37.245879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.328 [2024-07-14 07:44:37.245894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.328 [2024-07-14 07:44:37.245924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.328 qpair failed and we were unable to recover it.
00:27:21.328 [2024-07-14 07:44:37.255691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.328 [2024-07-14 07:44:37.255874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.328 [2024-07-14 07:44:37.255901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.328 [2024-07-14 07:44:37.255917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.328 [2024-07-14 07:44:37.255931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.328 [2024-07-14 07:44:37.255967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.328 qpair failed and we were unable to recover it.
00:27:21.328 [2024-07-14 07:44:37.265715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.328 [2024-07-14 07:44:37.265878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.328 [2024-07-14 07:44:37.265906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.328 [2024-07-14 07:44:37.265921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.328 [2024-07-14 07:44:37.265935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.328 [2024-07-14 07:44:37.265966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.328 qpair failed and we were unable to recover it.
00:27:21.328 [2024-07-14 07:44:37.275732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.328 [2024-07-14 07:44:37.275893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.328 [2024-07-14 07:44:37.275921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.328 [2024-07-14 07:44:37.275937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.328 [2024-07-14 07:44:37.275950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.328 [2024-07-14 07:44:37.275981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.328 qpair failed and we were unable to recover it.
00:27:21.328 [2024-07-14 07:44:37.285782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.328 [2024-07-14 07:44:37.285948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.328 [2024-07-14 07:44:37.285975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.328 [2024-07-14 07:44:37.285992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.328 [2024-07-14 07:44:37.286005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.328 [2024-07-14 07:44:37.286047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.328 qpair failed and we were unable to recover it.
00:27:21.328 [2024-07-14 07:44:37.295842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.328 [2024-07-14 07:44:37.296068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.328 [2024-07-14 07:44:37.296096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.329 [2024-07-14 07:44:37.296112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.329 [2024-07-14 07:44:37.296129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.329 [2024-07-14 07:44:37.296160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.329 qpair failed and we were unable to recover it.
00:27:21.329 [2024-07-14 07:44:37.305833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.329 [2024-07-14 07:44:37.306008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.329 [2024-07-14 07:44:37.306041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.329 [2024-07-14 07:44:37.306058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.329 [2024-07-14 07:44:37.306072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.329 [2024-07-14 07:44:37.306103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.329 qpair failed and we were unable to recover it.
00:27:21.329 [2024-07-14 07:44:37.315854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.329 [2024-07-14 07:44:37.316017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.329 [2024-07-14 07:44:37.316045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.329 [2024-07-14 07:44:37.316061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.329 [2024-07-14 07:44:37.316075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.329 [2024-07-14 07:44:37.316106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.329 qpair failed and we were unable to recover it.
00:27:21.329 [2024-07-14 07:44:37.325936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.329 [2024-07-14 07:44:37.326104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.329 [2024-07-14 07:44:37.326133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.329 [2024-07-14 07:44:37.326167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.329 [2024-07-14 07:44:37.326181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.329 [2024-07-14 07:44:37.326211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.329 qpair failed and we were unable to recover it.
00:27:21.329 [2024-07-14 07:44:37.335921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.329 [2024-07-14 07:44:37.336079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.329 [2024-07-14 07:44:37.336106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.329 [2024-07-14 07:44:37.336122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.329 [2024-07-14 07:44:37.336136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.329 [2024-07-14 07:44:37.336166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.329 qpair failed and we were unable to recover it.
00:27:21.329 [2024-07-14 07:44:37.345955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.329 [2024-07-14 07:44:37.346112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.329 [2024-07-14 07:44:37.346139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.329 [2024-07-14 07:44:37.346155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.329 [2024-07-14 07:44:37.346175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.329 [2024-07-14 07:44:37.346206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.329 qpair failed and we were unable to recover it.
00:27:21.329 [2024-07-14 07:44:37.355981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.329 [2024-07-14 07:44:37.356133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.329 [2024-07-14 07:44:37.356161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.329 [2024-07-14 07:44:37.356177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.329 [2024-07-14 07:44:37.356190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.329 [2024-07-14 07:44:37.356221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.329 qpair failed and we were unable to recover it.
00:27:21.329 [2024-07-14 07:44:37.366008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.329 [2024-07-14 07:44:37.366166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.329 [2024-07-14 07:44:37.366192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.329 [2024-07-14 07:44:37.366208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.329 [2024-07-14 07:44:37.366222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.329 [2024-07-14 07:44:37.366252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.329 qpair failed and we were unable to recover it.
00:27:21.329 [2024-07-14 07:44:37.376038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.329 [2024-07-14 07:44:37.376194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.329 [2024-07-14 07:44:37.376221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.329 [2024-07-14 07:44:37.376236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.329 [2024-07-14 07:44:37.376249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.329 [2024-07-14 07:44:37.376278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.329 qpair failed and we were unable to recover it. 00:27:21.329 [2024-07-14 07:44:37.386068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.329 [2024-07-14 07:44:37.386265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.329 [2024-07-14 07:44:37.386293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.329 [2024-07-14 07:44:37.386313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.329 [2024-07-14 07:44:37.386327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.329 [2024-07-14 07:44:37.386359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.329 qpair failed and we were unable to recover it. 00:27:21.329 [2024-07-14 07:44:37.396086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.329 [2024-07-14 07:44:37.396248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.329 [2024-07-14 07:44:37.396276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.329 [2024-07-14 07:44:37.396292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.329 [2024-07-14 07:44:37.396306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.329 [2024-07-14 07:44:37.396336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.329 qpair failed and we were unable to recover it. 
00:27:21.329 [2024-07-14 07:44:37.406136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.329 [2024-07-14 07:44:37.406296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.329 [2024-07-14 07:44:37.406323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.329 [2024-07-14 07:44:37.406339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.329 [2024-07-14 07:44:37.406353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.329 [2024-07-14 07:44:37.406383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.329 qpair failed and we were unable to recover it. 00:27:21.329 [2024-07-14 07:44:37.416157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.329 [2024-07-14 07:44:37.416320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.329 [2024-07-14 07:44:37.416347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.329 [2024-07-14 07:44:37.416363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.329 [2024-07-14 07:44:37.416377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.329 [2024-07-14 07:44:37.416406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.329 qpair failed and we were unable to recover it. 00:27:21.329 [2024-07-14 07:44:37.426193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.329 [2024-07-14 07:44:37.426344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.329 [2024-07-14 07:44:37.426372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.329 [2024-07-14 07:44:37.426388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.329 [2024-07-14 07:44:37.426402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.329 [2024-07-14 07:44:37.426446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.329 qpair failed and we were unable to recover it. 
00:27:21.329 [2024-07-14 07:44:37.436186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.329 [2024-07-14 07:44:37.436346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.329 [2024-07-14 07:44:37.436373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.329 [2024-07-14 07:44:37.436389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.329 [2024-07-14 07:44:37.436408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.330 [2024-07-14 07:44:37.436439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.330 qpair failed and we were unable to recover it. 00:27:21.330 [2024-07-14 07:44:37.446321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.330 [2024-07-14 07:44:37.446504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.330 [2024-07-14 07:44:37.446532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.330 [2024-07-14 07:44:37.446562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.330 [2024-07-14 07:44:37.446576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.330 [2024-07-14 07:44:37.446606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.330 qpair failed and we were unable to recover it. 00:27:21.330 [2024-07-14 07:44:37.456341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.330 [2024-07-14 07:44:37.456499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.330 [2024-07-14 07:44:37.456526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.330 [2024-07-14 07:44:37.456542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.330 [2024-07-14 07:44:37.456556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.330 [2024-07-14 07:44:37.456587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.330 qpair failed and we were unable to recover it. 
00:27:21.330 [2024-07-14 07:44:37.466325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.330 [2024-07-14 07:44:37.466503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.330 [2024-07-14 07:44:37.466530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.330 [2024-07-14 07:44:37.466545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.330 [2024-07-14 07:44:37.466559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.330 [2024-07-14 07:44:37.466589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.330 qpair failed and we were unable to recover it. 00:27:21.330 [2024-07-14 07:44:37.476376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.330 [2024-07-14 07:44:37.476554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.330 [2024-07-14 07:44:37.476592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.330 [2024-07-14 07:44:37.476610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.330 [2024-07-14 07:44:37.476640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.330 [2024-07-14 07:44:37.476674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.330 qpair failed and we were unable to recover it. 00:27:21.330 [2024-07-14 07:44:37.486445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.330 [2024-07-14 07:44:37.486612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.330 [2024-07-14 07:44:37.486639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.330 [2024-07-14 07:44:37.486655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.330 [2024-07-14 07:44:37.486670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.330 [2024-07-14 07:44:37.486700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.330 qpair failed and we were unable to recover it. 
00:27:21.330 [2024-07-14 07:44:37.496411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.330 [2024-07-14 07:44:37.496573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.330 [2024-07-14 07:44:37.496600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.330 [2024-07-14 07:44:37.496615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.330 [2024-07-14 07:44:37.496631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.330 [2024-07-14 07:44:37.496675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.330 qpair failed and we were unable to recover it. 00:27:21.589 [2024-07-14 07:44:37.506414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.506624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.506651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.506666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.506681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.506711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 00:27:21.589 [2024-07-14 07:44:37.516433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.516591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.516618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.516633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.516646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.516677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 
00:27:21.589 [2024-07-14 07:44:37.526558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.526723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.526750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.526774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.526789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.526819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 00:27:21.589 [2024-07-14 07:44:37.536553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.536727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.536756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.536775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.536804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.536834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 00:27:21.589 [2024-07-14 07:44:37.546520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.546680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.546708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.546723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.546736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.546767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 
00:27:21.589 [2024-07-14 07:44:37.556579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.556738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.556764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.556779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.556793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.556823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 00:27:21.589 [2024-07-14 07:44:37.566610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.566767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.566793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.566808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.566822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.566853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 00:27:21.589 [2024-07-14 07:44:37.576617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.576782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.576808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.576823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.576836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.576873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 
00:27:21.589 [2024-07-14 07:44:37.586669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.586832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.586859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.586884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.586900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.586930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 00:27:21.589 [2024-07-14 07:44:37.596723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.596911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.596938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.596956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.596971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.597002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 00:27:21.589 [2024-07-14 07:44:37.606755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.589 [2024-07-14 07:44:37.606950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.589 [2024-07-14 07:44:37.606977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.589 [2024-07-14 07:44:37.606995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.589 [2024-07-14 07:44:37.607011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.589 [2024-07-14 07:44:37.607041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.589 qpair failed and we were unable to recover it. 
00:27:21.589 [2024-07-14 07:44:37.616747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.616918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.616950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.616967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.616981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.617011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.626752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.626911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.626938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.626953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.626967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.626997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.636798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.636956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.636983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.636998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.637011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.637042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 
00:27:21.590 [2024-07-14 07:44:37.646816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.646995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.647021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.647036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.647050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.647080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.656888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.657046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.657071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.657086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.657101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.657137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.666903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.667087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.667113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.667128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.667143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.667173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 
00:27:21.590 [2024-07-14 07:44:37.676914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.677075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.677101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.677117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.677131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.677162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.686974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.687136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.687162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.687177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.687206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.687238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.697005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.697171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.697197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.697213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.697226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.697256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 
00:27:21.590 [2024-07-14 07:44:37.707026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.707241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.707273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.707288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.707303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.707333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.717070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.717230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.717258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.717274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.717289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.717320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.727103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.727260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.727287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.727302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.727317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.727347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 
00:27:21.590 [2024-07-14 07:44:37.737093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.737259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.737285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.737301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.737314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.737345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.747167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.747327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.590 [2024-07-14 07:44:37.747353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.590 [2024-07-14 07:44:37.747368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.590 [2024-07-14 07:44:37.747381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.590 [2024-07-14 07:44:37.747432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.590 qpair failed and we were unable to recover it. 00:27:21.590 [2024-07-14 07:44:37.757160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.590 [2024-07-14 07:44:37.757319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.591 [2024-07-14 07:44:37.757345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.591 [2024-07-14 07:44:37.757361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.591 [2024-07-14 07:44:37.757376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.591 [2024-07-14 07:44:37.757407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.591 qpair failed and we were unable to recover it. 
00:27:21.850 [2024-07-14 07:44:37.767203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.850 [2024-07-14 07:44:37.767371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.850 [2024-07-14 07:44:37.767397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.850 [2024-07-14 07:44:37.767412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.850 [2024-07-14 07:44:37.767425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.850 [2024-07-14 07:44:37.767457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.850 qpair failed and we were unable to recover it. 00:27:21.850 [2024-07-14 07:44:37.777229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.850 [2024-07-14 07:44:37.777407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.850 [2024-07-14 07:44:37.777434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.850 [2024-07-14 07:44:37.777449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.850 [2024-07-14 07:44:37.777464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.777494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.787276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.787434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.787460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.787475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.787488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.787517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 
00:27:21.851 [2024-07-14 07:44:37.797310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.797470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.797502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.797518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.797533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.797563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.807314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.807474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.807512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.807528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.807543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.807572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.817345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.817503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.817530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.817546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.817561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.817593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 
00:27:21.851 [2024-07-14 07:44:37.827354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.827514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.827540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.827555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.827570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.827601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.837379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.837537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.837563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.837579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.837598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.837631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.847462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.847638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.847665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.847680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.847694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.847724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 
00:27:21.851 [2024-07-14 07:44:37.857477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.857679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.857705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.857719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.857732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.857761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.867512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.867679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.867706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.867722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.867736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.867766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.877520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.877683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.877710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.877725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.877740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.877786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 
00:27:21.851 [2024-07-14 07:44:37.887578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.887763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.887789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.887804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.887819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.887849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.897570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.897764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.897791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.897807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.897822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.897862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.907606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.907780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.907807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.907822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.907836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.907885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 
00:27:21.851 [2024-07-14 07:44:37.917631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.917791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.851 [2024-07-14 07:44:37.917817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.851 [2024-07-14 07:44:37.917833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.851 [2024-07-14 07:44:37.917846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.851 [2024-07-14 07:44:37.917894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.851 qpair failed and we were unable to recover it. 00:27:21.851 [2024-07-14 07:44:37.927696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.851 [2024-07-14 07:44:37.927884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.852 [2024-07-14 07:44:37.927910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.852 [2024-07-14 07:44:37.927925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.852 [2024-07-14 07:44:37.927945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.852 [2024-07-14 07:44:37.927975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.852 qpair failed and we were unable to recover it. 00:27:21.852 [2024-07-14 07:44:37.937702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:21.852 [2024-07-14 07:44:37.937875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:21.852 [2024-07-14 07:44:37.937901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:21.852 [2024-07-14 07:44:37.937916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:21.852 [2024-07-14 07:44:37.937931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:21.852 [2024-07-14 07:44:37.937961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:21.852 qpair failed and we were unable to recover it. 
00:27:21.852 [2024-07-14 07:44:37.947704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.852 [2024-07-14 07:44:37.947884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.852 [2024-07-14 07:44:37.947910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.852 [2024-07-14 07:44:37.947925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.852 [2024-07-14 07:44:37.947939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.852 [2024-07-14 07:44:37.947970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.852 qpair failed and we were unable to recover it.
00:27:21.852 [2024-07-14 07:44:37.957791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.852 [2024-07-14 07:44:37.957976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.852 [2024-07-14 07:44:37.958002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.852 [2024-07-14 07:44:37.958018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.852 [2024-07-14 07:44:37.958032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.852 [2024-07-14 07:44:37.958075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.852 qpair failed and we were unable to recover it.
00:27:21.852 [2024-07-14 07:44:37.967772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.852 [2024-07-14 07:44:37.967958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.852 [2024-07-14 07:44:37.967984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.852 [2024-07-14 07:44:37.967999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.852 [2024-07-14 07:44:37.968013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.852 [2024-07-14 07:44:37.968043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.852 qpair failed and we were unable to recover it.
00:27:21.852 [2024-07-14 07:44:37.977786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.852 [2024-07-14 07:44:37.977960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.852 [2024-07-14 07:44:37.977987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.852 [2024-07-14 07:44:37.978002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.852 [2024-07-14 07:44:37.978016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.852 [2024-07-14 07:44:37.978048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.852 qpair failed and we were unable to recover it.
00:27:21.852 [2024-07-14 07:44:37.987839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.852 [2024-07-14 07:44:37.988055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.852 [2024-07-14 07:44:37.988082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.852 [2024-07-14 07:44:37.988097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.852 [2024-07-14 07:44:37.988112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.852 [2024-07-14 07:44:37.988142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.852 qpair failed and we were unable to recover it.
00:27:21.852 [2024-07-14 07:44:37.997823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.852 [2024-07-14 07:44:37.998002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.852 [2024-07-14 07:44:37.998028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.852 [2024-07-14 07:44:37.998044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.852 [2024-07-14 07:44:37.998058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.852 [2024-07-14 07:44:37.998088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.852 qpair failed and we were unable to recover it.
00:27:21.852 [2024-07-14 07:44:38.007943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.852 [2024-07-14 07:44:38.008120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.852 [2024-07-14 07:44:38.008147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.852 [2024-07-14 07:44:38.008162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.852 [2024-07-14 07:44:38.008177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.852 [2024-07-14 07:44:38.008207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.852 qpair failed and we were unable to recover it.
00:27:21.852 [2024-07-14 07:44:38.017950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:21.852 [2024-07-14 07:44:38.018112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:21.852 [2024-07-14 07:44:38.018139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:21.852 [2024-07-14 07:44:38.018160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:21.852 [2024-07-14 07:44:38.018176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:21.852 [2024-07-14 07:44:38.018207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:21.852 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.027940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.028110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.028137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.028152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.028166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.028196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.038038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.038201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.038227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.038242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.038256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.038286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.048002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.048178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.048204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.048218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.048232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.048262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.058021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.058185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.058212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.058227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.058241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.058270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.068066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.068234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.068260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.068276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.068290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.068320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.078073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.078244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.078271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.078286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.078300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.078329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.088129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.088303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.088329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.088344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.088358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.088388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.098341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.098519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.098545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.098560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.098575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.098604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.108214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.108379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.108405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.108427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.108467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.108498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.118214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.118376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.118401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.118416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.118430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.118460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.128250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.128463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.128489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.128504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.128519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.128549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.112 qpair failed and we were unable to recover it.
00:27:22.112 [2024-07-14 07:44:38.138263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.112 [2024-07-14 07:44:38.138426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.112 [2024-07-14 07:44:38.138451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.112 [2024-07-14 07:44:38.138466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.112 [2024-07-14 07:44:38.138481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.112 [2024-07-14 07:44:38.138511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.148353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.148519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.148546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.148562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.148576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.148606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.158320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.158482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.158509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.158524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.158538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.158568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.168392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.168560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.168585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.168599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.168614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.168644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.178389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.178554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.178579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.178594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.178609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.178638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.188461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.188638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.188664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.188679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.188693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.188722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.198495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.198666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.198697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.198713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.198743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.198774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.208532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.208710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.208737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.208751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.208766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.208795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.218546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.218746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.218772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.218787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.218801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.218846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.228580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.228742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.228767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.228783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.228797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.228827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.238565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.238763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.238804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.238819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.238833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.238891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.248628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.248821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.248848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.248863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.248887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.248919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.258616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.258785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.258812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.258827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.258841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.258878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.268687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.268857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.268889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.268905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.268919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.268951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.113 [2024-07-14 07:44:38.278696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.113 [2024-07-14 07:44:38.278875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.113 [2024-07-14 07:44:38.278902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.113 [2024-07-14 07:44:38.278917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.113 [2024-07-14 07:44:38.278931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.113 [2024-07-14 07:44:38.278961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.113 qpair failed and we were unable to recover it.
00:27:22.374 [2024-07-14 07:44:38.288782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.374 [2024-07-14 07:44:38.288970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.374 [2024-07-14 07:44:38.289002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.374 [2024-07-14 07:44:38.289018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.374 [2024-07-14 07:44:38.289032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.374 [2024-07-14 07:44:38.289062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.374 qpair failed and we were unable to recover it.
00:27:22.374 [2024-07-14 07:44:38.298752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.374 [2024-07-14 07:44:38.298927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.374 [2024-07-14 07:44:38.298953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.374 [2024-07-14 07:44:38.298969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.374 [2024-07-14 07:44:38.298983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.374 [2024-07-14 07:44:38.299013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.374 qpair failed and we were unable to recover it.
00:27:22.374 [2024-07-14 07:44:38.308815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.374 [2024-07-14 07:44:38.308990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.374 [2024-07-14 07:44:38.309028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.374 [2024-07-14 07:44:38.309047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.374 [2024-07-14 07:44:38.309061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.374 [2024-07-14 07:44:38.309092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.374 qpair failed and we were unable to recover it.
00:27:22.374 [2024-07-14 07:44:38.318798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.374 [2024-07-14 07:44:38.318958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.374 [2024-07-14 07:44:38.318984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.374 [2024-07-14 07:44:38.319000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.374 [2024-07-14 07:44:38.319013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.374 [2024-07-14 07:44:38.319043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.374 qpair failed and we were unable to recover it.
00:27:22.374 [2024-07-14 07:44:38.328840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.374 [2024-07-14 07:44:38.329026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.374 [2024-07-14 07:44:38.329053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.374 [2024-07-14 07:44:38.329068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.374 [2024-07-14 07:44:38.329086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.374 [2024-07-14 07:44:38.329118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.374 qpair failed and we were unable to recover it.
00:27:22.374 [2024-07-14 07:44:38.338952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.374 [2024-07-14 07:44:38.339116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.374 [2024-07-14 07:44:38.339143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.374 [2024-07-14 07:44:38.339159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.374 [2024-07-14 07:44:38.339172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.374 [2024-07-14 07:44:38.339202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.348901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.349064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.349092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.349108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.349122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.349153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.358968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.359166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.359194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.359210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.359224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.359270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.368969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.369129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.369157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.369173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.369186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.369217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.378997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.379162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.379189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.379206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.379219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.379249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.389045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.389221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.389249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.389265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.389278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.389309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.399073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.399235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.399262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.399278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.399291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.399321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.409095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.409259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.409285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.409301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.409314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.409343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.419196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.419351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.419377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.419392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.419412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.419442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.429119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.429272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.429299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.429315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.429328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.429359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.439157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.439328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.439357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.439389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.439404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.439433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.449231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.449394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.449422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.449439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.449468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.449498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.459309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.459469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.459497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.459512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.375 [2024-07-14 07:44:38.459526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.375 [2024-07-14 07:44:38.459571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.375 qpair failed and we were unable to recover it.
00:27:22.375 [2024-07-14 07:44:38.469266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.375 [2024-07-14 07:44:38.469441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.375 [2024-07-14 07:44:38.469469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.375 [2024-07-14 07:44:38.469503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.376 [2024-07-14 07:44:38.469518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.376 [2024-07-14 07:44:38.469548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.376 qpair failed and we were unable to recover it.
00:27:22.376 [2024-07-14 07:44:38.479304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.376 [2024-07-14 07:44:38.479504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.376 [2024-07-14 07:44:38.479545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.376 [2024-07-14 07:44:38.479561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.376 [2024-07-14 07:44:38.479574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.376 [2024-07-14 07:44:38.479617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.376 qpair failed and we were unable to recover it.
00:27:22.376 [2024-07-14 07:44:38.489314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.376 [2024-07-14 07:44:38.489478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.376 [2024-07-14 07:44:38.489505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.376 [2024-07-14 07:44:38.489521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.376 [2024-07-14 07:44:38.489535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.376 [2024-07-14 07:44:38.489565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.376 qpair failed and we were unable to recover it.
00:27:22.376 [2024-07-14 07:44:38.499337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.376 [2024-07-14 07:44:38.499505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.376 [2024-07-14 07:44:38.499533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.376 [2024-07-14 07:44:38.499548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.376 [2024-07-14 07:44:38.499562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.376 [2024-07-14 07:44:38.499592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.376 qpair failed and we were unable to recover it.
00:27:22.376 [2024-07-14 07:44:38.509391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.376 [2024-07-14 07:44:38.509549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.376 [2024-07-14 07:44:38.509575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.376 [2024-07-14 07:44:38.509597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.376 [2024-07-14 07:44:38.509611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.376 [2024-07-14 07:44:38.509641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.376 qpair failed and we were unable to recover it.
00:27:22.376 [2024-07-14 07:44:38.519407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.376 [2024-07-14 07:44:38.519607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.376 [2024-07-14 07:44:38.519649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.376 [2024-07-14 07:44:38.519665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.376 [2024-07-14 07:44:38.519677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.376 [2024-07-14 07:44:38.519721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.376 qpair failed and we were unable to recover it.
00:27:22.376 [2024-07-14 07:44:38.529508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.376 [2024-07-14 07:44:38.529685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.376 [2024-07-14 07:44:38.529727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.376 [2024-07-14 07:44:38.529743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.376 [2024-07-14 07:44:38.529757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.376 [2024-07-14 07:44:38.529801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.376 qpair failed and we were unable to recover it.
00:27:22.376 [2024-07-14 07:44:38.539475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.376 [2024-07-14 07:44:38.539638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.376 [2024-07-14 07:44:38.539665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.376 [2024-07-14 07:44:38.539680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.376 [2024-07-14 07:44:38.539694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.376 [2024-07-14 07:44:38.539724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.376 qpair failed and we were unable to recover it.
00:27:22.635 [2024-07-14 07:44:38.549550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.635 [2024-07-14 07:44:38.549764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.635 [2024-07-14 07:44:38.549805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.635 [2024-07-14 07:44:38.549821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.635 [2024-07-14 07:44:38.549834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.635 [2024-07-14 07:44:38.549885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.635 qpair failed and we were unable to recover it.
00:27:22.635 [2024-07-14 07:44:38.559526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.635 [2024-07-14 07:44:38.559691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.635 [2024-07-14 07:44:38.559718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.635 [2024-07-14 07:44:38.559734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.635 [2024-07-14 07:44:38.559748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.635 [2024-07-14 07:44:38.559791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.635 qpair failed and we were unable to recover it.
00:27:22.635 [2024-07-14 07:44:38.569581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.635 [2024-07-14 07:44:38.569742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.635 [2024-07-14 07:44:38.569769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.635 [2024-07-14 07:44:38.569785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.635 [2024-07-14 07:44:38.569814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.635 [2024-07-14 07:44:38.569844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.635 qpair failed and we were unable to recover it.
00:27:22.635 [2024-07-14 07:44:38.579575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.635 [2024-07-14 07:44:38.579775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.579816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.579831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.579844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.579898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.589616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.589774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.589800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.589816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.589830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.589860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.599648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.599808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.599834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.599855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.599876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.599907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.609760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.609934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.609961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.609977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.609990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.610020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.619693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.619873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.619900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.619916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.619929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.619961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.629730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.629898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.629927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.629943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.629957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.629986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.639759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.639921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.639949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.639965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.639978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.640007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.649877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.650048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.650076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.650091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.650105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.650135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.659842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.660014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.660042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.660059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.660072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.660115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.669847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.670040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.670067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.670083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.670097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.670127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.679853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.680020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.680048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.680063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.680077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.680106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.689926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.690096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.690129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.690147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.690161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.690191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.699957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.700154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.700182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.700197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.700211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.700240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.709950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.710109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.710136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.710153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.710167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.636 [2024-07-14 07:44:38.710197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.636 qpair failed and we were unable to recover it.
00:27:22.636 [2024-07-14 07:44:38.720055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.636 [2024-07-14 07:44:38.720247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.636 [2024-07-14 07:44:38.720274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.636 [2024-07-14 07:44:38.720290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.636 [2024-07-14 07:44:38.720304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.720347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.637 [2024-07-14 07:44:38.730059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.637 [2024-07-14 07:44:38.730259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.637 [2024-07-14 07:44:38.730301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.637 [2024-07-14 07:44:38.730316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.637 [2024-07-14 07:44:38.730329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.730378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.637 [2024-07-14 07:44:38.740054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.637 [2024-07-14 07:44:38.740217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.637 [2024-07-14 07:44:38.740244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.637 [2024-07-14 07:44:38.740261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.637 [2024-07-14 07:44:38.740274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.740304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.637 [2024-07-14 07:44:38.750097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.637 [2024-07-14 07:44:38.750283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.637 [2024-07-14 07:44:38.750327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.637 [2024-07-14 07:44:38.750343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.637 [2024-07-14 07:44:38.750355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.750413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.637 [2024-07-14 07:44:38.760105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.637 [2024-07-14 07:44:38.760257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.637 [2024-07-14 07:44:38.760286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.637 [2024-07-14 07:44:38.760302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.637 [2024-07-14 07:44:38.760315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.760357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.637 [2024-07-14 07:44:38.770196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.637 [2024-07-14 07:44:38.770364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.637 [2024-07-14 07:44:38.770393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.637 [2024-07-14 07:44:38.770412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.637 [2024-07-14 07:44:38.770426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.770484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.637 [2024-07-14 07:44:38.780236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.637 [2024-07-14 07:44:38.780431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.637 [2024-07-14 07:44:38.780479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.637 [2024-07-14 07:44:38.780495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.637 [2024-07-14 07:44:38.780507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.780537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.637 [2024-07-14 07:44:38.790189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.637 [2024-07-14 07:44:38.790349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.637 [2024-07-14 07:44:38.790377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.637 [2024-07-14 07:44:38.790396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.637 [2024-07-14 07:44:38.790410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.790454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.637 [2024-07-14 07:44:38.800212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.637 [2024-07-14 07:44:38.800368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.637 [2024-07-14 07:44:38.800396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.637 [2024-07-14 07:44:38.800412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.637 [2024-07-14 07:44:38.800426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.637 [2024-07-14 07:44:38.800456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.637 qpair failed and we were unable to recover it.
00:27:22.901 [2024-07-14 07:44:38.810266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.901 [2024-07-14 07:44:38.810436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.901 [2024-07-14 07:44:38.810462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.901 [2024-07-14 07:44:38.810478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.901 [2024-07-14 07:44:38.810492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.901 [2024-07-14 07:44:38.810523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.901 qpair failed and we were unable to recover it.
00:27:22.901 [2024-07-14 07:44:38.820341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.901 [2024-07-14 07:44:38.820541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.901 [2024-07-14 07:44:38.820583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.901 [2024-07-14 07:44:38.820599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.901 [2024-07-14 07:44:38.820612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.901 [2024-07-14 07:44:38.820646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.901 qpair failed and we were unable to recover it.
00:27:22.901 [2024-07-14 07:44:38.830310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.901 [2024-07-14 07:44:38.830464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.901 [2024-07-14 07:44:38.830490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.901 [2024-07-14 07:44:38.830506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.901 [2024-07-14 07:44:38.830521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.901 [2024-07-14 07:44:38.830553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.901 qpair failed and we were unable to recover it.
00:27:22.901 [2024-07-14 07:44:38.840352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.901 [2024-07-14 07:44:38.840511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.901 [2024-07-14 07:44:38.840538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.901 [2024-07-14 07:44:38.840554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.901 [2024-07-14 07:44:38.840567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.901 [2024-07-14 07:44:38.840612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.901 qpair failed and we were unable to recover it.
00:27:22.901 [2024-07-14 07:44:38.850406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.901 [2024-07-14 07:44:38.850570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.901 [2024-07-14 07:44:38.850597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.901 [2024-07-14 07:44:38.850612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.901 [2024-07-14 07:44:38.850626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.901 [2024-07-14 07:44:38.850657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.901 qpair failed and we were unable to recover it.
00:27:22.901 [2024-07-14 07:44:38.860421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.901 [2024-07-14 07:44:38.860588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.901 [2024-07-14 07:44:38.860613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.901 [2024-07-14 07:44:38.860628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.901 [2024-07-14 07:44:38.860655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.860684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.870420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.870631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.870663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.870679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.870693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.870725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.880498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.880662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.880688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.880703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.880717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.880746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.890522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.890729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.890756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.890771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.890785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.890842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.900571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.900737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.900764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.900780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.900793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.900839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.910526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.910719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.910746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.910762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.910787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.910819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.920544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.920699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.920726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.920741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.920755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.920785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.930571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.930734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.930761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.930776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.930789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.930819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.940596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.940756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.940783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.940798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.940812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.940842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.950670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.950829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.950856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.950880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.950896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.950928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.960660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.902 [2024-07-14 07:44:38.960820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.902 [2024-07-14 07:44:38.960847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.902 [2024-07-14 07:44:38.960862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.902 [2024-07-14 07:44:38.960887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.902 [2024-07-14 07:44:38.960919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.902 qpair failed and we were unable to recover it.
00:27:22.902 [2024-07-14 07:44:38.970725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:38.970928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:38.970955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:38.970971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:38.970985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:38.971016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:38.980752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:38.980923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:38.980949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:38.980964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:38.980977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:38.981008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:38.990749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:38.990913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:38.990940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:38.990955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:38.990969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:38.990998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:39.000801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:39.000957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:39.000983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:39.001003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:39.001019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:39.001049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:39.010835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:39.011012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:39.011038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:39.011053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:39.011066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:39.011096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:39.020845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:39.021027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:39.021053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:39.021069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:39.021082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:39.021113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:39.030899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:39.031064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:39.031090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:39.031105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:39.031119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:39.031150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:39.040911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:39.041067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:39.041093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:39.041108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:39.041123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:39.041153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:39.050957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:39.051118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:39.051145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:39.051161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:39.051175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:39.051206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:22.903 [2024-07-14 07:44:39.060977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:22.903 [2024-07-14 07:44:39.061136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:22.903 [2024-07-14 07:44:39.061163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:22.903 [2024-07-14 07:44:39.061178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:22.903 [2024-07-14 07:44:39.061192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:22.903 [2024-07-14 07:44:39.061223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:22.903 qpair failed and we were unable to recover it.
00:27:23.161 [2024-07-14 07:44:39.071137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.161 [2024-07-14 07:44:39.071323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.161 [2024-07-14 07:44:39.071367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.161 [2024-07-14 07:44:39.071384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.161 [2024-07-14 07:44:39.071397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:23.161 [2024-07-14 07:44:39.071442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.161 qpair failed and we were unable to recover it.
00:27:23.161 [2024-07-14 07:44:39.081075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:23.161 [2024-07-14 07:44:39.081239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:23.161 [2024-07-14 07:44:39.081267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:23.161 [2024-07-14 07:44:39.081283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:23.161 [2024-07-14 07:44:39.081296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:23.161 [2024-07-14 07:44:39.081326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:23.161 qpair failed and we were unable to recover it.
00:27:23.161 [2024-07-14 07:44:39.091113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.091319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.091347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.091368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.091383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.161 [2024-07-14 07:44:39.091413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.161 qpair failed and we were unable to recover it. 00:27:23.161 [2024-07-14 07:44:39.101134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.101309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.101336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.101352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.101366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.161 [2024-07-14 07:44:39.101396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.161 qpair failed and we were unable to recover it. 00:27:23.161 [2024-07-14 07:44:39.111097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.111255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.111280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.111296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.111311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.161 [2024-07-14 07:44:39.111341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.161 qpair failed and we were unable to recover it. 
00:27:23.161 [2024-07-14 07:44:39.121149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.121307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.121335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.121350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.121365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.161 [2024-07-14 07:44:39.121409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.161 qpair failed and we were unable to recover it. 00:27:23.161 [2024-07-14 07:44:39.131234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.131426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.131455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.131486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.131501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.161 [2024-07-14 07:44:39.131560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.161 qpair failed and we were unable to recover it. 00:27:23.161 [2024-07-14 07:44:39.141194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.141359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.141387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.141403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.141417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.161 [2024-07-14 07:44:39.141459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.161 qpair failed and we were unable to recover it. 
00:27:23.161 [2024-07-14 07:44:39.151229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.151386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.151413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.151429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.151442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.161 [2024-07-14 07:44:39.151473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.161 qpair failed and we were unable to recover it. 00:27:23.161 [2024-07-14 07:44:39.161236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.161396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.161423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.161439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.161453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.161 [2024-07-14 07:44:39.161483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.161 qpair failed and we were unable to recover it. 00:27:23.161 [2024-07-14 07:44:39.171312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.161 [2024-07-14 07:44:39.171473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.161 [2024-07-14 07:44:39.171500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.161 [2024-07-14 07:44:39.171517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.161 [2024-07-14 07:44:39.171530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.171572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 
00:27:23.162 [2024-07-14 07:44:39.181335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.181534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.181567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.181587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.181601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.181647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.191366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.191555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.191585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.191601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.191630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.191661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.201351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.201513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.201540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.201556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.201570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.201600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 
00:27:23.162 [2024-07-14 07:44:39.211412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.211571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.211598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.211614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.211628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.211657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.221418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.221573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.221600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.221616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.221630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.221667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.231465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.231625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.231653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.231673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.231688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.231718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 
00:27:23.162 [2024-07-14 07:44:39.241469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.241678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.241705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.241720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.241733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.241763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.251519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.251681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.251707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.251723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.251737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.251779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.261526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.261686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.261714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.261729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.261743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.261774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 
00:27:23.162 [2024-07-14 07:44:39.271570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.271722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.271754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.271771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.271785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.271815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.281599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.281821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.281848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.281863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.281887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.281917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.291630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.291800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.291826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.291842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.291856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.291892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 
00:27:23.162 [2024-07-14 07:44:39.301653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.301813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.301840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.301856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.301880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.301912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.311670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.311826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.311852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.311875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.311890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.311935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 00:27:23.162 [2024-07-14 07:44:39.321706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.162 [2024-07-14 07:44:39.321912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.162 [2024-07-14 07:44:39.321939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.162 [2024-07-14 07:44:39.321955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.162 [2024-07-14 07:44:39.321968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.162 [2024-07-14 07:44:39.321998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.162 qpair failed and we were unable to recover it. 
00:27:23.420 [2024-07-14 07:44:39.331750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.420 [2024-07-14 07:44:39.331952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.420 [2024-07-14 07:44:39.331977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.420 [2024-07-14 07:44:39.331993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.420 [2024-07-14 07:44:39.332006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.420 [2024-07-14 07:44:39.332037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.420 qpair failed and we were unable to recover it. 00:27:23.420 [2024-07-14 07:44:39.341774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.420 [2024-07-14 07:44:39.341951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.420 [2024-07-14 07:44:39.341978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.341993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.342006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.342037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.351791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.351959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.351986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.352002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.352015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.352047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 
00:27:23.421 [2024-07-14 07:44:39.361807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.361969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.362003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.362020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.362035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.362065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.371882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.372046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.372072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.372087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.372101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.372132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.381923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.382088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.382113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.382128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.382142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.382172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 
00:27:23.421 [2024-07-14 07:44:39.391914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.392078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.392104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.392120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.392133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.392164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.401938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.402094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.402119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.402135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.402154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.402185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.412041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.412206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.412233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.412251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.412265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.412295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 
00:27:23.421 [2024-07-14 07:44:39.422006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.422168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.422195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.422210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.422224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.422254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.432030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.432227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.432253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.432268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.432283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.432313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.442081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.442236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.442263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.442278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.442293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.442323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 
00:27:23.421 [2024-07-14 07:44:39.452153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.452357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.452385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.452405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.452420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.452465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.462131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.462290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.462316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.462331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.462362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.462392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.472134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.472290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.472316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.472331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.472344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.472374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 
00:27:23.421 [2024-07-14 07:44:39.482226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.482434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.482460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.482475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.482489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.482519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.492226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.492386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.492412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.492428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.492448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.492492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.502212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.502370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.502397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.502412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.502426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.502457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 
00:27:23.421 [2024-07-14 07:44:39.512299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.512464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.512491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.512507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.512522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.512568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.522296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.522448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.522475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.522490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.522504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.522534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.532314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.532477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.532502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.532518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.532531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.532562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 
00:27:23.421 [2024-07-14 07:44:39.542355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.542507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.542532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.542547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.542560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.542590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.552453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.552643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.552669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.552685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.552698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.552729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.562416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.562623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.562649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.562665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.562680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.562710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 
00:27:23.421 [2024-07-14 07:44:39.572530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.572711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.421 [2024-07-14 07:44:39.572737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.421 [2024-07-14 07:44:39.572752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.421 [2024-07-14 07:44:39.572766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.421 [2024-07-14 07:44:39.572796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.421 qpair failed and we were unable to recover it. 00:27:23.421 [2024-07-14 07:44:39.582469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.421 [2024-07-14 07:44:39.582629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.422 [2024-07-14 07:44:39.582655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.422 [2024-07-14 07:44:39.582676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.422 [2024-07-14 07:44:39.582691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.422 [2024-07-14 07:44:39.582720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.422 qpair failed and we were unable to recover it. 00:27:23.679 [2024-07-14 07:44:39.592504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.679 [2024-07-14 07:44:39.592680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.679 [2024-07-14 07:44:39.592706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.679 [2024-07-14 07:44:39.592722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.679 [2024-07-14 07:44:39.592737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.679 [2024-07-14 07:44:39.592767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.679 qpair failed and we were unable to recover it. 
00:27:23.679 [2024-07-14 07:44:39.602524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.602694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.602720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.602735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.602750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.602780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.612585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.612756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.612782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.612797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.612810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.612855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.622644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.622809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.622836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.622851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.622872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.622904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 
00:27:23.680 [2024-07-14 07:44:39.632672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.632832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.632859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.632882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.632898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.632928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.642644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.642799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.642826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.642841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.642854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.642894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.652745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.652946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.652972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.652988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.653001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.653032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 
00:27:23.680 [2024-07-14 07:44:39.662722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.662891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.662917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.662933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.662946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.662976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.672750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.672909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.672935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.672955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.672970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.673000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.682820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.682987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.683013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.683028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.683042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.683072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 
00:27:23.680 [2024-07-14 07:44:39.692850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.693047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.693073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.693088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.693102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.693133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.702858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.703033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.703059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.703074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.703088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.703118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.712872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.713034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.713061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.713076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.713091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.713121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 
00:27:23.680 [2024-07-14 07:44:39.722924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.723083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.723110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.723125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.723142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.723172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.732979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.733140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.733166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.733181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.733194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.733225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.742999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.743162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.743188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.743204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.743219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.743248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 
00:27:23.680 [2024-07-14 07:44:39.753005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.753167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.753193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.753208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.753237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.753266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.763045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.763205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.763236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.763252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.763266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.763296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.773085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.773245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.773271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.773286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.773300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.773346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 
00:27:23.680 [2024-07-14 07:44:39.783101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.783258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.783283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.783299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.783314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.783360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.793139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.793319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.793345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.793361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.793374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.793403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.803152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.803323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.803349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.803365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.803380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.803416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 
00:27:23.680 [2024-07-14 07:44:39.813216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.813397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.813424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.813440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.813455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.813501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.823229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.823386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.823412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.823427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.823440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.823471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.680 [2024-07-14 07:44:39.833231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.833401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.833427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.833442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.833456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.833486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 
00:27:23.680 [2024-07-14 07:44:39.843311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.680 [2024-07-14 07:44:39.843487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.680 [2024-07-14 07:44:39.843513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.680 [2024-07-14 07:44:39.843529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.680 [2024-07-14 07:44:39.843543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.680 [2024-07-14 07:44:39.843589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.680 qpair failed and we were unable to recover it. 00:27:23.938 [2024-07-14 07:44:39.853339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.853542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.853574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.853590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.853605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.853650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 00:27:23.938 [2024-07-14 07:44:39.863366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.863533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.863557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.863572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.863585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.863615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 
00:27:23.938 [2024-07-14 07:44:39.873397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.873560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.873586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.873601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.873616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.873647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 00:27:23.938 [2024-07-14 07:44:39.883408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.883567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.883594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.883609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.883622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.883653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 00:27:23.938 [2024-07-14 07:44:39.893506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.893699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.893725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.893740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.893760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.893806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 
00:27:23.938 [2024-07-14 07:44:39.903459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.903621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.903647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.903661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.903675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.903706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 00:27:23.938 [2024-07-14 07:44:39.913494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.913644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.913669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.913685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.913698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.913729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 00:27:23.938 [2024-07-14 07:44:39.923518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.923711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.923737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.923753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.923766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.923797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 
00:27:23.938 [2024-07-14 07:44:39.933581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.938 [2024-07-14 07:44:39.933749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.938 [2024-07-14 07:44:39.933775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.938 [2024-07-14 07:44:39.933790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.938 [2024-07-14 07:44:39.933818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.938 [2024-07-14 07:44:39.933849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.938 qpair failed and we were unable to recover it. 00:27:23.938 [2024-07-14 07:44:39.943585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:39.943777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:39.943805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:39.943820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:39.943834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:39.943873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:39.953617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:39.953776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:39.953802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:39.953817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:39.953831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:39.953862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 
00:27:23.939 [2024-07-14 07:44:39.963636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:39.963793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:39.963819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:39.963834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:39.963848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:39.963890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:39.973672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:39.973831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:39.973857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:39.973880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:39.973896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:39.973927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:39.983709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:39.983879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:39.983905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:39.983920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:39.983940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:39.983984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 
00:27:23.939 [2024-07-14 07:44:39.993735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:39.993916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:39.993942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:39.993958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:39.993971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:39.994002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:40.003771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.003941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.003968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.003984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.003999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.004030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:40.013831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.014015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.014045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.014062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.014075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.014107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 
00:27:23.939 [2024-07-14 07:44:40.023842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.024048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.024076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.024091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.024106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.024138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:40.033881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.034045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.034072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.034087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.034102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.034133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:40.043910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.044065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.044093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.044108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.044122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.044167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 
00:27:23.939 [2024-07-14 07:44:40.053935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.054106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.054133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.054148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.054162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.054193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:40.063966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.064124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.064151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.064166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.064181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.064224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:40.073982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.074226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.074252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.074276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.074291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.074349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 
00:27:23.939 [2024-07-14 07:44:40.084016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.084192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.084218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.084234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.084248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.084278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:40.094040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.094273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.094299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.094314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.094327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.094357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 00:27:23.939 [2024-07-14 07:44:40.104036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:23.939 [2024-07-14 07:44:40.104208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:23.939 [2024-07-14 07:44:40.104235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:23.939 [2024-07-14 07:44:40.104251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:23.939 [2024-07-14 07:44:40.104265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:23.939 [2024-07-14 07:44:40.104296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.939 qpair failed and we were unable to recover it. 
00:27:24.197 [2024-07-14 07:44:40.114100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.114275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.114301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.114317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.114331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.114362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-14 07:44:40.124095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.124260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.124286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.124301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.124316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.124345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-14 07:44:40.134169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.134348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.134374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.134389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.134404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.134433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 
00:27:24.197 [2024-07-14 07:44:40.144153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.144318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.144344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.144359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.144373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.144402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-14 07:44:40.154172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.154335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.154362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.154377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.154391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.154421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-14 07:44:40.164223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.164429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.164472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.164495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.164510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.164554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 
00:27:24.197 [2024-07-14 07:44:40.174274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.174446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.174472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.174487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.174502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.174532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-14 07:44:40.184260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.184423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.184449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.184464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.184479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.184508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-14 07:44:40.194279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.194449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.194474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.194489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.194504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.194534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 
00:27:24.197 [2024-07-14 07:44:40.204322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.204488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.204514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.204537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.204550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.204581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-14 07:44:40.214344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.214508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.214534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.214549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.214562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.214592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-14 07:44:40.224361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:24.197 [2024-07-14 07:44:40.224575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:24.197 [2024-07-14 07:44:40.224601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:24.197 [2024-07-14 07:44:40.224616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:24.197 [2024-07-14 07:44:40.224629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:24.197 [2024-07-14 07:44:40.224659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:24.197 qpair failed and we were unable to recover it. 
00:27:24.197 [2024-07-14 07:44:40.234415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.234616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.234641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.234665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.197 [2024-07-14 07:44:40.234678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.197 [2024-07-14 07:44:40.234709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.197 qpair failed and we were unable to recover it.
00:27:24.197 [2024-07-14 07:44:40.244478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.244663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.244704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.244720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.197 [2024-07-14 07:44:40.244734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.197 [2024-07-14 07:44:40.244778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.197 qpair failed and we were unable to recover it.
00:27:24.197 [2024-07-14 07:44:40.254523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.254682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.254713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.254745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.197 [2024-07-14 07:44:40.254760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.197 [2024-07-14 07:44:40.254789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.197 qpair failed and we were unable to recover it.
00:27:24.197 [2024-07-14 07:44:40.264510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.264669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.264696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.264712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.197 [2024-07-14 07:44:40.264740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.197 [2024-07-14 07:44:40.264770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.197 qpair failed and we were unable to recover it.
00:27:24.197 [2024-07-14 07:44:40.274523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.274679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.274705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.274720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.197 [2024-07-14 07:44:40.274733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.197 [2024-07-14 07:44:40.274764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.197 qpair failed and we were unable to recover it.
00:27:24.197 [2024-07-14 07:44:40.284565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.284727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.284755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.284770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.197 [2024-07-14 07:44:40.284784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.197 [2024-07-14 07:44:40.284814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.197 qpair failed and we were unable to recover it.
00:27:24.197 [2024-07-14 07:44:40.294605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.294766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.294792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.294822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.197 [2024-07-14 07:44:40.294836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.197 [2024-07-14 07:44:40.294893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.197 qpair failed and we were unable to recover it.
00:27:24.197 [2024-07-14 07:44:40.304632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.304799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.304827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.304842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.197 [2024-07-14 07:44:40.304858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.197 [2024-07-14 07:44:40.304895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.197 qpair failed and we were unable to recover it.
00:27:24.197 [2024-07-14 07:44:40.314677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.197 [2024-07-14 07:44:40.314841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.197 [2024-07-14 07:44:40.314875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.197 [2024-07-14 07:44:40.314893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.198 [2024-07-14 07:44:40.314907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.198 [2024-07-14 07:44:40.314937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.198 qpair failed and we were unable to recover it.
00:27:24.198 [2024-07-14 07:44:40.324677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.198 [2024-07-14 07:44:40.324832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.198 [2024-07-14 07:44:40.324859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.198 [2024-07-14 07:44:40.324884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.198 [2024-07-14 07:44:40.324899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.198 [2024-07-14 07:44:40.324928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.198 qpair failed and we were unable to recover it.
00:27:24.198 [2024-07-14 07:44:40.334752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.198 [2024-07-14 07:44:40.334964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.198 [2024-07-14 07:44:40.334991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.198 [2024-07-14 07:44:40.335007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.198 [2024-07-14 07:44:40.335020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.198 [2024-07-14 07:44:40.335050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.198 qpair failed and we were unable to recover it.
00:27:24.198 [2024-07-14 07:44:40.344757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.198 [2024-07-14 07:44:40.344922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.198 [2024-07-14 07:44:40.344955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.198 [2024-07-14 07:44:40.344971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.198 [2024-07-14 07:44:40.344985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.198 [2024-07-14 07:44:40.345017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.198 qpair failed and we were unable to recover it.
00:27:24.198 [2024-07-14 07:44:40.354772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.198 [2024-07-14 07:44:40.354936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.198 [2024-07-14 07:44:40.354963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.198 [2024-07-14 07:44:40.354978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.198 [2024-07-14 07:44:40.354992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.198 [2024-07-14 07:44:40.355023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.198 qpair failed and we were unable to recover it.
00:27:24.198 [2024-07-14 07:44:40.364809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.198 [2024-07-14 07:44:40.364970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.198 [2024-07-14 07:44:40.364997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.198 [2024-07-14 07:44:40.365012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.198 [2024-07-14 07:44:40.365026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.198 [2024-07-14 07:44:40.365055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.198 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.374859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.375040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.375067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.375083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.375097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.375127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.384915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.385090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.385117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.385133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.385161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.385196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.394899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.395062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.395088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.395104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.395118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.395149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.404930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.405132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.405160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.405175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.405188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.405218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.414954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.415119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.415146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.415162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.415176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.415206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.424978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.425143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.425169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.425185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.425199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.425229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.435011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.435171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.435203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.435235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.435249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.435306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.445038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.445195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.445222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.445238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.445266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.445297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.455064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.455228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.455253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.455269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.455283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.455312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.465103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.465267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.465294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.465312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.465342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.465372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.475127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.475310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.455 [2024-07-14 07:44:40.475337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.455 [2024-07-14 07:44:40.475353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.455 [2024-07-14 07:44:40.475373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.455 [2024-07-14 07:44:40.475405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.455 qpair failed and we were unable to recover it.
00:27:24.455 [2024-07-14 07:44:40.485134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.455 [2024-07-14 07:44:40.485349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.485376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.485391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.485405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.485435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.495212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.495371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.495398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.495414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.495428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.495458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.505242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.505453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.505480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.505496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.505509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.505539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.515202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.515362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.515389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.515409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.515438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.515468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.525257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.525426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.525454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.525470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.525484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.525514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.535296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.535458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.535485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.535500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.535514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.535543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.545306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.545467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.545495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.545511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.545525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.545556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.555346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.555501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.555527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.555543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.555557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.555602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.565360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.565530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.565557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.565578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.565593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.565624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.575393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.575556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.575583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.575598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.575612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.575642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.585433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.585618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.585645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.585661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.585675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.585705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.595462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.595624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.595651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.595667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.595696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.595726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.605507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.605663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.605690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.605721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.605736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.605766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.456 [2024-07-14 07:44:40.615555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.456 [2024-07-14 07:44:40.615740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.456 [2024-07-14 07:44:40.615767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.456 [2024-07-14 07:44:40.615783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.456 [2024-07-14 07:44:40.615797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.456 [2024-07-14 07:44:40.615827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.456 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.625565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.625770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.625797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.625812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.625826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.625856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.635560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.635723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.635750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.635766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.635781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.635810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.645599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.645757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.645784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.645800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.645814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.645857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.655654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.655818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.655846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.655873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.655890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.655922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.665699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.665858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.665890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.665907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.665921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.665951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.675689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.675850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.675891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.675909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.675923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.675953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.685759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.685924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.685953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.685969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.685983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.686012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.695743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.695908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.695935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.695950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.695963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.695993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.705802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.705973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.706000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.706016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.706030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.706059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.715799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.715963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.715990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.716006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.716020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.716051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.725888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.726049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.726076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.726092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.713 [2024-07-14 07:44:40.726106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.713 [2024-07-14 07:44:40.726136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.713 qpair failed and we were unable to recover it.
00:27:24.713 [2024-07-14 07:44:40.735895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.713 [2024-07-14 07:44:40.736058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.713 [2024-07-14 07:44:40.736085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.713 [2024-07-14 07:44:40.736101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.736115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.736158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.745938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.746100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.746135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.746151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.746165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.746195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.755953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.756123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.756150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.756166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.756179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.756209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.765964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.766126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.766153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.766168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.766182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.766212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.776085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.776304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.776334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.776350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.776363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.776392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.786048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.786211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.786238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.786254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.786268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.786318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.796077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.796244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.796271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.796287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.796301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.796330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.806084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.806281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.806308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.806323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.806336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.806365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.816116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.816281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.816307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.816323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.816337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.816367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.826139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.826301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.826329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.826344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.826358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.826388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.836190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.836389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.836421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.836438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.836453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.836483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.846209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.846397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.846426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.846459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.846473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.846518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.856225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.856390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.856418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.856433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.856446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.856477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.866251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.866411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.866437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.866451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.866464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.866494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.714 [2024-07-14 07:44:40.876303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.714 [2024-07-14 07:44:40.876475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.714 [2024-07-14 07:44:40.876502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.714 [2024-07-14 07:44:40.876518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.714 [2024-07-14 07:44:40.876532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.714 [2024-07-14 07:44:40.876568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.714 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.886337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.886512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.886539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.886555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.886569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.886599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.896367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.896572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.896599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.896615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.896629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.896660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.906357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.906511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.906539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.906555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.906569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.906600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.916405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.916582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.916609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.916624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.916638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.916669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.926435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.926638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.926671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.926687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.926700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.926743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.936500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.936666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.936693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.936709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.936738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.936769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.946500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.946654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.946681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.946697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.946710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.946755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.956592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.956759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.956786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.956802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.956816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.956883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.966548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.966717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.966745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.966761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.966779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.966810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.976616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.976783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.976810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.976825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.976839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.976876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.986599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.986755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.986782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.986798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.986811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.986841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:40.996683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:40.996879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.975 [2024-07-14 07:44:40.996907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.975 [2024-07-14 07:44:40.996922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.975 [2024-07-14 07:44:40.996936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.975 [2024-07-14 07:44:40.996965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.975 qpair failed and we were unable to recover it.
00:27:24.975 [2024-07-14 07:44:41.006660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.975 [2024-07-14 07:44:41.006823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.006850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.006873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.006889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.006920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.016714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.016928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.016955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.016971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.016986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.017015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.026736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.026905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.026934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.026953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.026967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.027011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.036752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.036965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.037007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.037023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.037036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.037079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.046794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.046953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.046980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.046996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.047010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.047041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.056821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.057005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.057033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.057052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.057072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.057114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.066824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.066994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.067022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.067038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.067052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.067083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.076887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.077087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.077115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.077131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.077145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.077174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.086908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.087067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.087094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.087110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.087124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.087154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.096929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.097090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.097118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.097133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.097147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.097191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.106963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.107141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.107169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.107185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.107199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.107229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.116980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.117141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.117168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.117184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.117198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.117228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.126999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.127178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.127206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.127221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.127235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.127265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:24.976 [2024-07-14 07:44:41.137035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:24.976 [2024-07-14 07:44:41.137190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:24.976 [2024-07-14 07:44:41.137217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:24.976 [2024-07-14 07:44:41.137232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:24.976 [2024-07-14 07:44:41.137246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:24.976 [2024-07-14 07:44:41.137276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.976 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.147088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.147272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.147299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.234 [2024-07-14 07:44:41.147321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.234 [2024-07-14 07:44:41.147338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.234 [2024-07-14 07:44:41.147368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.234 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.157139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.157304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.157330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.234 [2024-07-14 07:44:41.157345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.234 [2024-07-14 07:44:41.157359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.234 [2024-07-14 07:44:41.157389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.234 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.167157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.167358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.167399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.234 [2024-07-14 07:44:41.167414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.234 [2024-07-14 07:44:41.167428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.234 [2024-07-14 07:44:41.167472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.234 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.177205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.177371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.177398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.234 [2024-07-14 07:44:41.177413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.234 [2024-07-14 07:44:41.177427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.234 [2024-07-14 07:44:41.177458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.234 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.187196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.187355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.187382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.234 [2024-07-14 07:44:41.187397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.234 [2024-07-14 07:44:41.187410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.234 [2024-07-14 07:44:41.187440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.234 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.197219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.197375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.197401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.234 [2024-07-14 07:44:41.197416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.234 [2024-07-14 07:44:41.197430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.234 [2024-07-14 07:44:41.197475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.234 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.207286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.207447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.207474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.234 [2024-07-14 07:44:41.207490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.234 [2024-07-14 07:44:41.207504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.234 [2024-07-14 07:44:41.207548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.234 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.217274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.217437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.217463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.234 [2024-07-14 07:44:41.217478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.234 [2024-07-14 07:44:41.217492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.234 [2024-07-14 07:44:41.217522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.234 qpair failed and we were unable to recover it.
00:27:25.234 [2024-07-14 07:44:41.227309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.234 [2024-07-14 07:44:41.227470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.234 [2024-07-14 07:44:41.227501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.227516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.227530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.227559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.237316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.237492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.237518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.237539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.237556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.237587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.247368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.247545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.247572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.247588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.247602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.247633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.257414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.257598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.257637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.257654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.257668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.257711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.267491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.267660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.267686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.267702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.267717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.267763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.277528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.277704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.277729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.277745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.277759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.277789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.287508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.287725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.287751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.287766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.287780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.287810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.297582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.297748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.297773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.297788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.297801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.297845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.307559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.307724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.307750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.307766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.307781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.307823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.317628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.317840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.317873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.317891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.317906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.317937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.327604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.327774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.327805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.327822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.327836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.327885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.337660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.337826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.337864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.337888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.337903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.337946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.347672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.347833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.347860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.347884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.347899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.347930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.357697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.357860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.357893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.357909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.357923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.357953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.235 [2024-07-14 07:44:41.367742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:25.235 [2024-07-14 07:44:41.367944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:25.235 [2024-07-14 07:44:41.367971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:25.235 [2024-07-14 07:44:41.367986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:25.235 [2024-07-14 07:44:41.368001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90
00:27:25.235 [2024-07-14 07:44:41.368037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:25.235 qpair failed and we were unable to recover it.
00:27:25.236 [2024-07-14 07:44:41.377769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.236 [2024-07-14 07:44:41.377968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.236 [2024-07-14 07:44:41.377995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.236 [2024-07-14 07:44:41.378010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.236 [2024-07-14 07:44:41.378025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.236 [2024-07-14 07:44:41.378054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.236 qpair failed and we were unable to recover it. 00:27:25.236 [2024-07-14 07:44:41.387803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.236 [2024-07-14 07:44:41.387985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.236 [2024-07-14 07:44:41.388012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.236 [2024-07-14 07:44:41.388030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.236 [2024-07-14 07:44:41.388045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.236 [2024-07-14 07:44:41.388074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.236 qpair failed and we were unable to recover it. 00:27:25.236 [2024-07-14 07:44:41.397812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.236 [2024-07-14 07:44:41.398021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.236 [2024-07-14 07:44:41.398048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.236 [2024-07-14 07:44:41.398069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.236 [2024-07-14 07:44:41.398084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.236 [2024-07-14 07:44:41.398115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.236 qpair failed and we were unable to recover it. 
00:27:25.493 [2024-07-14 07:44:41.407844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.493 [2024-07-14 07:44:41.408027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.493 [2024-07-14 07:44:41.408054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.493 [2024-07-14 07:44:41.408069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.493 [2024-07-14 07:44:41.408084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.493 [2024-07-14 07:44:41.408114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.493 qpair failed and we were unable to recover it. 00:27:25.493 [2024-07-14 07:44:41.417927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.493 [2024-07-14 07:44:41.418104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.493 [2024-07-14 07:44:41.418136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.418157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.418170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.418213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 00:27:25.494 [2024-07-14 07:44:41.427949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.428138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.428164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.428179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.428193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.428223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 
00:27:25.494 [2024-07-14 07:44:41.437965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.438153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.438195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.438213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.438227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.438273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 00:27:25.494 [2024-07-14 07:44:41.448004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.448169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.448196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.448212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.448238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.448268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 00:27:25.494 [2024-07-14 07:44:41.458022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.458220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.458246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.458262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.458283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.458316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 
00:27:25.494 [2024-07-14 07:44:41.468024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.468190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.468218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.468233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.468263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.468293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 00:27:25.494 [2024-07-14 07:44:41.478077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.478238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.478264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.478279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.478293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.478324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 00:27:25.494 [2024-07-14 07:44:41.488074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.488231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.488258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.488273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.488286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.488315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 
00:27:25.494 [2024-07-14 07:44:41.498128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.498327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.498353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.498368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.498382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.498411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 00:27:25.494 [2024-07-14 07:44:41.508273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.508481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.508521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.508536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.508549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.494 [2024-07-14 07:44:41.508593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.494 qpair failed and we were unable to recover it. 00:27:25.494 [2024-07-14 07:44:41.518167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.494 [2024-07-14 07:44:41.518336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.494 [2024-07-14 07:44:41.518362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.494 [2024-07-14 07:44:41.518377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.494 [2024-07-14 07:44:41.518407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.518438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 
00:27:25.495 [2024-07-14 07:44:41.528220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.528381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.528407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.528422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.528451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.528481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 00:27:25.495 [2024-07-14 07:44:41.538339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.538525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.538566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.538583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.538596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.538627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 00:27:25.495 [2024-07-14 07:44:41.548267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.548439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.548465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.548480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.548499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.548530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 
00:27:25.495 [2024-07-14 07:44:41.558284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.558444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.558470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.558485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.558500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.558529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 00:27:25.495 [2024-07-14 07:44:41.568303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.568464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.568491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.568506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.568520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.568550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 00:27:25.495 [2024-07-14 07:44:41.578341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.578502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.578528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.578543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.578557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.578587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 
00:27:25.495 [2024-07-14 07:44:41.588377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.588547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.588573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.588589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.588602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.588632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 00:27:25.495 [2024-07-14 07:44:41.598417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.598590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.598616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.598631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.598646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.598676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 00:27:25.495 [2024-07-14 07:44:41.608505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.608726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.608768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.608784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.608797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.608843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 
00:27:25.495 [2024-07-14 07:44:41.618491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.618666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.495 [2024-07-14 07:44:41.618693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.495 [2024-07-14 07:44:41.618708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.495 [2024-07-14 07:44:41.618738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.495 [2024-07-14 07:44:41.618767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.495 qpair failed and we were unable to recover it. 00:27:25.495 [2024-07-14 07:44:41.628478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.495 [2024-07-14 07:44:41.628638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.496 [2024-07-14 07:44:41.628664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.496 [2024-07-14 07:44:41.628680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.496 [2024-07-14 07:44:41.628694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.496 [2024-07-14 07:44:41.628724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.496 qpair failed and we were unable to recover it. 00:27:25.496 [2024-07-14 07:44:41.638498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.496 [2024-07-14 07:44:41.638654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.496 [2024-07-14 07:44:41.638681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.496 [2024-07-14 07:44:41.638703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.496 [2024-07-14 07:44:41.638718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.496 [2024-07-14 07:44:41.638749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.496 qpair failed and we were unable to recover it. 
00:27:25.496 [2024-07-14 07:44:41.648564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.496 [2024-07-14 07:44:41.648775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.496 [2024-07-14 07:44:41.648801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.496 [2024-07-14 07:44:41.648816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.496 [2024-07-14 07:44:41.648830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.496 [2024-07-14 07:44:41.648860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.496 qpair failed and we were unable to recover it. 00:27:25.496 [2024-07-14 07:44:41.658591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.496 [2024-07-14 07:44:41.658798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.496 [2024-07-14 07:44:41.658824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.496 [2024-07-14 07:44:41.658839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.496 [2024-07-14 07:44:41.658854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.496 [2024-07-14 07:44:41.658890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.496 qpair failed and we were unable to recover it. 00:27:25.754 [2024-07-14 07:44:41.668593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.754 [2024-07-14 07:44:41.668763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.754 [2024-07-14 07:44:41.668789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.754 [2024-07-14 07:44:41.668804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.754 [2024-07-14 07:44:41.668818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.754 [2024-07-14 07:44:41.668848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.754 qpair failed and we were unable to recover it. 
00:27:25.754 [2024-07-14 07:44:41.678617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.754 [2024-07-14 07:44:41.678784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.754 [2024-07-14 07:44:41.678810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.754 [2024-07-14 07:44:41.678825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.754 [2024-07-14 07:44:41.678839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.754 [2024-07-14 07:44:41.678875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.754 qpair failed and we were unable to recover it. 00:27:25.754 [2024-07-14 07:44:41.688650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.754 [2024-07-14 07:44:41.688812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.754 [2024-07-14 07:44:41.688837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.754 [2024-07-14 07:44:41.688852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.754 [2024-07-14 07:44:41.688875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.754 [2024-07-14 07:44:41.688909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.754 qpair failed and we were unable to recover it. 00:27:25.754 [2024-07-14 07:44:41.698702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.754 [2024-07-14 07:44:41.698877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.754 [2024-07-14 07:44:41.698903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.754 [2024-07-14 07:44:41.698918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.754 [2024-07-14 07:44:41.698933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.754 [2024-07-14 07:44:41.698963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.754 qpair failed and we were unable to recover it. 
00:27:25.754 [2024-07-14 07:44:41.708712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.754 [2024-07-14 07:44:41.708884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.754 [2024-07-14 07:44:41.708909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.754 [2024-07-14 07:44:41.708925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.754 [2024-07-14 07:44:41.708939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.754 [2024-07-14 07:44:41.708968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.754 qpair failed and we were unable to recover it. 00:27:25.754 [2024-07-14 07:44:41.718756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.754 [2024-07-14 07:44:41.718939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.754 [2024-07-14 07:44:41.718965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.754 [2024-07-14 07:44:41.718981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.754 [2024-07-14 07:44:41.718995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.754 [2024-07-14 07:44:41.719025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.754 qpair failed and we were unable to recover it. 00:27:25.754 [2024-07-14 07:44:41.728746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.754 [2024-07-14 07:44:41.728908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.754 [2024-07-14 07:44:41.728934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.754 [2024-07-14 07:44:41.728956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.728971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.729001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 
00:27:25.755 [2024-07-14 07:44:41.738799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.738976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.739002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.739017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.739031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.739061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 00:27:25.755 [2024-07-14 07:44:41.748829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.749002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.749029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.749045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.749060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.749090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 00:27:25.755 [2024-07-14 07:44:41.758849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.759028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.759054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.759069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.759084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.759114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 
00:27:25.755 [2024-07-14 07:44:41.768905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.769090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.769116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.769131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.769146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.769203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 00:27:25.755 [2024-07-14 07:44:41.778958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.779129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.779155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.779170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.779184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.779214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 00:27:25.755 [2024-07-14 07:44:41.788986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.789154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.789180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.789195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.789210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.789241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 
00:27:25.755 [2024-07-14 07:44:41.798974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.799142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.799168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.799183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.799196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.799225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 00:27:25.755 [2024-07-14 07:44:41.809006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.809161] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.809186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.809202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.809216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.809246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 00:27:25.755 [2024-07-14 07:44:41.819063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.819230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.819261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.819277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.819307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.819337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 
00:27:25.755 [2024-07-14 07:44:41.829084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.829254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.829280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.829295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.829309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.829338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 00:27:25.755 [2024-07-14 07:44:41.839113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.755 [2024-07-14 07:44:41.839275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.755 [2024-07-14 07:44:41.839302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.755 [2024-07-14 07:44:41.839317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.755 [2024-07-14 07:44:41.839331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.755 [2024-07-14 07:44:41.839360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.755 qpair failed and we were unable to recover it. 00:27:25.755 [2024-07-14 07:44:41.849128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.756 [2024-07-14 07:44:41.849275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.756 [2024-07-14 07:44:41.849300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.756 [2024-07-14 07:44:41.849315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.756 [2024-07-14 07:44:41.849330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.756 [2024-07-14 07:44:41.849361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.756 qpair failed and we were unable to recover it. 
00:27:25.756 [2024-07-14 07:44:41.859201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.756 [2024-07-14 07:44:41.859368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.756 [2024-07-14 07:44:41.859395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.756 [2024-07-14 07:44:41.859410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.756 [2024-07-14 07:44:41.859440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.756 [2024-07-14 07:44:41.859477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.756 qpair failed and we were unable to recover it. 00:27:25.756 [2024-07-14 07:44:41.869223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.756 [2024-07-14 07:44:41.869420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.756 [2024-07-14 07:44:41.869445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.756 [2024-07-14 07:44:41.869460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.756 [2024-07-14 07:44:41.869474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.756 [2024-07-14 07:44:41.869503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.756 qpair failed and we were unable to recover it. 00:27:25.756 [2024-07-14 07:44:41.879218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.756 [2024-07-14 07:44:41.879387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.756 [2024-07-14 07:44:41.879413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.756 [2024-07-14 07:44:41.879429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.756 [2024-07-14 07:44:41.879443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.756 [2024-07-14 07:44:41.879473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.756 qpair failed and we were unable to recover it. 
00:27:25.756 [2024-07-14 07:44:41.889247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.756 [2024-07-14 07:44:41.889406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.756 [2024-07-14 07:44:41.889432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.756 [2024-07-14 07:44:41.889446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.756 [2024-07-14 07:44:41.889460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.756 [2024-07-14 07:44:41.889490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.756 qpair failed and we were unable to recover it. 00:27:25.756 [2024-07-14 07:44:41.899284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.756 [2024-07-14 07:44:41.899460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.756 [2024-07-14 07:44:41.899486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.756 [2024-07-14 07:44:41.899501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.756 [2024-07-14 07:44:41.899514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.756 [2024-07-14 07:44:41.899544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.756 qpair failed and we were unable to recover it. 00:27:25.756 [2024-07-14 07:44:41.909326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.756 [2024-07-14 07:44:41.909496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.756 [2024-07-14 07:44:41.909528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.756 [2024-07-14 07:44:41.909544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.756 [2024-07-14 07:44:41.909558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.756 [2024-07-14 07:44:41.909603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.756 qpair failed and we were unable to recover it. 
00:27:25.756 [2024-07-14 07:44:41.919344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:25.756 [2024-07-14 07:44:41.919502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:25.756 [2024-07-14 07:44:41.919528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:25.756 [2024-07-14 07:44:41.919544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:25.756 [2024-07-14 07:44:41.919558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:25.756 [2024-07-14 07:44:41.919601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:25.756 qpair failed and we were unable to recover it. 00:27:26.014 [2024-07-14 07:44:41.929414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.014 [2024-07-14 07:44:41.929588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.014 [2024-07-14 07:44:41.929615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:41.929647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:41.929662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:41.929692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 00:27:26.015 [2024-07-14 07:44:41.939437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:41.939602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:41.939628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:41.939658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:41.939672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:41.939717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 
00:27:26.015 [2024-07-14 07:44:41.949471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:41.949664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:41.949705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:41.949721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:41.949734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:41.949770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 00:27:26.015 [2024-07-14 07:44:41.959485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:41.959664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:41.959690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:41.959706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:41.959720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:41.959749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 00:27:26.015 [2024-07-14 07:44:41.969473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:41.969632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:41.969659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:41.969674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:41.969688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:41.969717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 
00:27:26.015 [2024-07-14 07:44:41.979531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:41.979702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:41.979727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:41.979743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:41.979758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:41.979787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 00:27:26.015 [2024-07-14 07:44:41.989578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:41.989785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:41.989827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:41.989842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:41.989856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:41.989908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 00:27:26.015 [2024-07-14 07:44:41.999588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:41.999756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:41.999782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:41.999797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:41.999812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:41.999842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 
00:27:26.015 [2024-07-14 07:44:42.009613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:42.009768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:42.009795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:42.009810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:42.009825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c30000b90 00:27:26.015 [2024-07-14 07:44:42.009874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.015 qpair failed and we were unable to recover it. 00:27:26.015 [2024-07-14 07:44:42.019685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:42.019943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:42.019977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:42.019994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:42.020009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c38000b90 00:27:26.015 [2024-07-14 07:44:42.020040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.015 qpair failed and we were unable to recover it. 00:27:26.015 [2024-07-14 07:44:42.029690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:42.029851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:42.029891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:42.029918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:42.029931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bc69f0 00:27:26.015 [2024-07-14 07:44:42.029962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.015 qpair failed and we were unable to recover it. 
00:27:26.015 [2024-07-14 07:44:42.039729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:42.039889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.015 [2024-07-14 07:44:42.039922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.015 [2024-07-14 07:44:42.039937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.015 [2024-07-14 07:44:42.039955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bc69f0 00:27:26.015 [2024-07-14 07:44:42.039987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.015 qpair failed and we were unable to recover it. 00:27:26.015 [2024-07-14 07:44:42.049770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.015 [2024-07-14 07:44:42.049938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.016 [2024-07-14 07:44:42.049972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.016 [2024-07-14 07:44:42.049989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.016 [2024-07-14 07:44:42.050003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c38000b90 00:27:26.016 [2024-07-14 07:44:42.050047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.016 qpair failed and we were unable to recover it. 00:27:26.016 [2024-07-14 07:44:42.050187] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:26.016 A controller has encountered a failure and is being reset. 00:27:26.016 [2024-07-14 07:44:42.059794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.016 [2024-07-14 07:44:42.059974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.016 [2024-07-14 07:44:42.060009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.016 [2024-07-14 07:44:42.060026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.016 [2024-07-14 07:44:42.060040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c28000b90 00:27:26.016 [2024-07-14 07:44:42.060071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:26.016 qpair failed and we were unable to recover it. 
00:27:26.016 [2024-07-14 07:44:42.069840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.016 [2024-07-14 07:44:42.070025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.016 [2024-07-14 07:44:42.070057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.016 [2024-07-14 07:44:42.070074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.016 [2024-07-14 07:44:42.070088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0c28000b90 00:27:26.016 [2024-07-14 07:44:42.070133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:26.016 qpair failed and we were unable to recover it. 00:27:26.016 [2024-07-14 07:44:42.070248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd44b0 (9): Bad file descriptor 00:27:26.016 Controller properly reset. 00:27:26.016 Initializing NVMe Controllers 00:27:26.016 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:26.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:26.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:26.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:26.016 Initialization complete. Launching workers. 
00:27:26.016 Starting thread on core 1 00:27:26.016 Starting thread on core 2 00:27:26.016 Starting thread on core 3 00:27:26.016 Starting thread on core 0 00:27:26.016 07:44:42 -- host/target_disconnect.sh@59 -- # sync 00:27:26.016 00:27:26.016 real 0m11.515s 00:27:26.016 user 0m19.971s 00:27:26.016 sys 0m5.664s 00:27:26.016 07:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.016 07:44:42 -- common/autotest_common.sh@10 -- # set +x 00:27:26.016 ************************************ 00:27:26.016 END TEST nvmf_target_disconnect_tc2 00:27:26.016 ************************************ 00:27:26.016 07:44:42 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:27:26.016 07:44:42 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:26.016 07:44:42 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:27:26.016 07:44:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:26.016 07:44:42 -- nvmf/common.sh@116 -- # sync 00:27:26.016 07:44:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:26.016 07:44:42 -- nvmf/common.sh@119 -- # set +e 00:27:26.016 07:44:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:26.016 07:44:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:26.016 rmmod nvme_tcp 00:27:26.275 rmmod nvme_fabrics 00:27:26.275 rmmod nvme_keyring 00:27:26.275 07:44:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:26.275 07:44:42 -- nvmf/common.sh@123 -- # set -e 00:27:26.275 07:44:42 -- nvmf/common.sh@124 -- # return 0 00:27:26.275 07:44:42 -- nvmf/common.sh@477 -- # '[' -n 19266 ']' 00:27:26.275 07:44:42 -- nvmf/common.sh@478 -- # killprocess 19266 00:27:26.275 07:44:42 -- common/autotest_common.sh@926 -- # '[' -z 19266 ']' 00:27:26.275 07:44:42 -- common/autotest_common.sh@930 -- # kill -0 19266 00:27:26.275 07:44:42 -- common/autotest_common.sh@931 -- # uname 00:27:26.275 07:44:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:26.275 07:44:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 19266 00:27:26.275 07:44:42 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:27:26.275 07:44:42 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:27:26.275 07:44:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 19266' 00:27:26.275 killing process with pid 19266 00:27:26.275 07:44:42 -- common/autotest_common.sh@945 -- # kill 19266 00:27:26.275 07:44:42 -- common/autotest_common.sh@950 -- # wait 19266 00:27:26.535 07:44:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:26.535 07:44:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:26.535 07:44:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:26.535 07:44:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.535 07:44:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:26.535 07:44:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.535 07:44:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.535 07:44:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.437 07:44:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:28.437 00:27:28.437 real 0m16.213s 00:27:28.437 user 0m45.999s 00:27:28.437 sys 0m7.593s 00:27:28.437 07:44:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.437 07:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:28.437 ************************************ 00:27:28.437 END TEST nvmf_target_disconnect 00:27:28.437 ************************************ 
00:27:28.437 07:44:44 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:27:28.437 07:44:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:28.437 07:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:28.695 07:44:44 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:27:28.695 00:27:28.695 real 21m1.952s 00:27:28.695 user 60m15.930s 00:27:28.695 sys 5m6.416s 00:27:28.695 07:44:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.695 07:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:28.695 ************************************ 00:27:28.695 END TEST nvmf_tcp 00:27:28.695 ************************************ 00:27:28.695 07:44:44 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:27:28.695 07:44:44 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:28.695 07:44:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:28.695 07:44:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:28.695 07:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:28.695 ************************************ 00:27:28.695 START TEST spdkcli_nvmf_tcp 00:27:28.695 ************************************ 00:27:28.695 07:44:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:28.695 * Looking for test storage... 00:27:28.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:28.695 07:44:44 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:28.695 07:44:44 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:28.695 07:44:44 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:28.695 07:44:44 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.695 07:44:44 -- nvmf/common.sh@7 -- # uname -s 00:27:28.695 07:44:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.695 07:44:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.695 07:44:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.695 07:44:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.695 07:44:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.695 07:44:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.695 07:44:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.695 07:44:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.695 07:44:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.695 07:44:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.695 07:44:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.695 07:44:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.695 07:44:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.695 07:44:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.695 07:44:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.695 07:44:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.695 07:44:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.695 07:44:44 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.695 07:44:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.695 07:44:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.695 07:44:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.695 07:44:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.695 07:44:44 -- paths/export.sh@5 -- # export PATH 00:27:28.695 07:44:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.695 07:44:44 -- nvmf/common.sh@46 -- # : 0 00:27:28.695 07:44:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:28.695 07:44:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:28.695 07:44:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:28.695 07:44:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.695 07:44:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.695 07:44:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:28.695 07:44:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:28.696 07:44:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:28.696 07:44:44 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:28.696 07:44:44 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:28.696 07:44:44 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:28.696 07:44:44 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:28.696 07:44:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:28.696 07:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:28.696 07:44:44 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:28.696 07:44:44 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=20485 00:27:28.696 07:44:44 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:28.696 07:44:44 -- spdkcli/common.sh@34 -- # waitforlisten 20485 00:27:28.696 07:44:44 -- common/autotest_common.sh@819 -- # '[' -z 20485 ']' 00:27:28.696 07:44:44 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:27:28.696 07:44:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:28.696 07:44:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.696 07:44:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:28.696 07:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:28.696 [2024-07-14 07:44:44.751831] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:28.696 [2024-07-14 07:44:44.751947] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20485 ] 00:27:28.696 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.696 [2024-07-14 07:44:44.808683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:28.954 [2024-07-14 07:44:44.914395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:28.954 [2024-07-14 07:44:44.914628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.954 [2024-07-14 07:44:44.914634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.518 07:44:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:29.518 07:44:45 -- common/autotest_common.sh@852 -- # return 0 00:27:29.518 07:44:45 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:29.518 07:44:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:29.518 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:29.775 07:44:45 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:29.775 07:44:45 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:29.775 07:44:45 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:29.775 07:44:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:29.775 07:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:29.775 07:44:45 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:29.775 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:29.775 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:29.775 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:29.775 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:29.775 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:29.775 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:29.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:29.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:29.775 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:29.775 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:29.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:29.775 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:29.775 ' 00:27:30.033 [2024-07-14 07:44:46.102661] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:32.561 [2024-07-14 07:44:48.282680] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.527 [2024-07-14 07:44:49.523231] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:36.053 [2024-07-14 07:44:51.798735] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:37.951 [2024-07-14 07:44:53.809403] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:39.323 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:39.323 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:39.323 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:39.323 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:39.323 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:39.323 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:39.323 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:39.323 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:39.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:39.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:39.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:39.323 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:39.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:39.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:39.323 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:39.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:39.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:39.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:39.324 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:39.324 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:39.324 07:44:55 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:39.324 07:44:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:39.324 07:44:55 -- common/autotest_common.sh@10 -- # set +x 00:27:39.324 07:44:55 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:39.324 07:44:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:39.324 07:44:55 -- common/autotest_common.sh@10 -- # set +x 00:27:39.324 07:44:55 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:39.324 07:44:55 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:39.890 07:44:55 -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:39.890 07:44:55 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:39.890 07:44:55 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:39.890 07:44:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:39.890 07:44:55 -- common/autotest_common.sh@10 -- # set +x 00:27:39.890 07:44:55 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:39.890 07:44:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:39.890 07:44:55 -- common/autotest_common.sh@10 -- # set +x 00:27:39.890 07:44:55 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:39.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:39.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:39.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:39.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:39.890 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:39.890 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:39.890 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:39.890 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:39.890 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:39.890 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:39.890 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:39.890 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:39.890 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:39.890 ' 00:27:45.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:45.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:45.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:45.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:45.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:45.151 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:45.151 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:45.151 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:45.151 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:45.151 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:45.151 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:45.151 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:45.151 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:45.151 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:45.151 07:45:01 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:45.151 07:45:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:45.151 07:45:01 -- common/autotest_common.sh@10 -- # set +x 00:27:45.151 07:45:01 -- spdkcli/nvmf.sh@90 -- # killprocess 20485 00:27:45.151 07:45:01 -- common/autotest_common.sh@926 -- # '[' -z 20485 ']' 00:27:45.151 07:45:01 -- common/autotest_common.sh@930 -- # kill -0 20485 00:27:45.151 07:45:01 -- common/autotest_common.sh@931 -- # uname 00:27:45.151 07:45:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:45.152 07:45:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 20485 00:27:45.152 07:45:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:45.152 07:45:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:45.152 07:45:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 20485' 00:27:45.152 killing process with pid 20485 00:27:45.152 07:45:01 -- common/autotest_common.sh@945 -- # kill 20485 00:27:45.152 [2024-07-14 07:45:01.186660] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:45.152 07:45:01 -- common/autotest_common.sh@950 -- # wait 20485 00:27:45.410 07:45:01 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:45.410 07:45:01 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:45.410 07:45:01 -- spdkcli/common.sh@13 -- # '[' -n 20485 ']' 00:27:45.410 07:45:01 -- spdkcli/common.sh@14 -- # killprocess 20485 00:27:45.410 07:45:01 -- common/autotest_common.sh@926 -- # '[' -z 20485 ']' 00:27:45.410 07:45:01 -- common/autotest_common.sh@930 -- # kill -0 20485 00:27:45.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (20485) - No such process 00:27:45.410 07:45:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 20485 is not found' 00:27:45.410 Process with pid 20485 is not found 00:27:45.410 07:45:01 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:45.411 07:45:01 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:45.411 07:45:01 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:45.411 00:27:45.411 real 0m16.805s 00:27:45.411 user 0m35.584s 00:27:45.411 sys 0m0.873s 00:27:45.411 07:45:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.411 07:45:01 -- common/autotest_common.sh@10 -- # set +x 00:27:45.411 ************************************ 00:27:45.411 END TEST spdkcli_nvmf_tcp 00:27:45.411 ************************************ 00:27:45.411 07:45:01 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:45.411 07:45:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:45.411 07:45:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:45.411 07:45:01 -- common/autotest_common.sh@10 -- # set +x 00:27:45.411 ************************************ 00:27:45.411 START TEST nvmf_identify_passthru 
00:27:45.411 ************************************ 00:27:45.411 07:45:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:45.411 * Looking for test storage... 00:27:45.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:45.411 07:45:01 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.411 07:45:01 -- nvmf/common.sh@7 -- # uname -s 00:27:45.411 07:45:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.411 07:45:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.411 07:45:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.411 07:45:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.411 07:45:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.411 07:45:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.411 07:45:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.411 07:45:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.411 07:45:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.411 07:45:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.411 07:45:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.411 07:45:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.411 07:45:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.411 07:45:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.411 07:45:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.411 07:45:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.411 07:45:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.411 07:45:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.411 07:45:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.411 07:45:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.411 07:45:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.411 07:45:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.411 07:45:01 -- paths/export.sh@5 -- # export PATH 00:27:45.411 07:45:01 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.411 07:45:01 -- nvmf/common.sh@46 -- # : 0 00:27:45.411 07:45:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:45.411 07:45:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:45.411 07:45:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:45.411 07:45:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.411 07:45:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.411 07:45:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:45.411 07:45:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:45.411 07:45:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:45.411 07:45:01 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.411 07:45:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.411 07:45:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.411 07:45:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.411 07:45:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.411 07:45:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.411 07:45:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.411 07:45:01 -- paths/export.sh@5 -- # export PATH 00:27:45.411 07:45:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.411 07:45:01 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:45.411 07:45:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:45.411 07:45:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.411 07:45:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:45.411 07:45:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:45.411 07:45:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:45.411 07:45:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.411 07:45:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:45.411 07:45:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.411 07:45:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:45.411 07:45:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:45.411 07:45:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:45.411 07:45:01 -- common/autotest_common.sh@10 -- # set +x 00:27:47.314 07:45:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:47.314 07:45:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:47.314 07:45:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:47.314 07:45:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:47.314 07:45:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:47.314 07:45:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:47.314 07:45:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:47.314 07:45:03 -- nvmf/common.sh@294 -- # net_devs=() 00:27:47.314 07:45:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:47.314 07:45:03 -- nvmf/common.sh@295 -- # e810=() 00:27:47.314 07:45:03 -- nvmf/common.sh@295 -- # local -ga e810 00:27:47.314 07:45:03 -- nvmf/common.sh@296 -- # x722=() 00:27:47.314 07:45:03 -- nvmf/common.sh@296 -- # local -ga x722 00:27:47.314 07:45:03 -- nvmf/common.sh@297 -- # mlx=() 00:27:47.314 07:45:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:47.314 07:45:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.314 07:45:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:47.314 07:45:03 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:47.314 07:45:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:47.314 07:45:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:47.314 07:45:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:47.314 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:47.314 07:45:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:47.314 07:45:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:47.314 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:47.314 07:45:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:47.314 07:45:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:47.314 07:45:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.314 07:45:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:47.314 07:45:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.314 07:45:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:47.314 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:47.314 07:45:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.314 07:45:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:47.314 07:45:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.314 07:45:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:47.314 07:45:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.314 07:45:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:47.314 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:47.314 07:45:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.314 07:45:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:47.314 07:45:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:47.314 07:45:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:47.314 07:45:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:47.314 07:45:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.314 07:45:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.314 07:45:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.314 07:45:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:47.314 07:45:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.314 07:45:03 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.314 07:45:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:47.314 07:45:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.314 07:45:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.314 07:45:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:47.314 07:45:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:47.314 07:45:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.314 07:45:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.573 07:45:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.573 07:45:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.573 07:45:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:47.573 07:45:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.573 07:45:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.573 07:45:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.573 07:45:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:47.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:27:47.573 00:27:47.573 --- 10.0.0.2 ping statistics --- 00:27:47.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.573 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:27:47.573 07:45:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:27:47.573 00:27:47.573 --- 10.0.0.1 ping statistics --- 00:27:47.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.573 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:47.573 07:45:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.573 07:45:03 -- nvmf/common.sh@410 -- # return 0 00:27:47.573 07:45:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:47.573 07:45:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.573 07:45:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:47.573 07:45:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:47.573 07:45:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.573 07:45:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:47.573 07:45:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:47.573 07:45:03 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:47.573 07:45:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:47.573 07:45:03 -- common/autotest_common.sh@10 -- # set +x 00:27:47.573 07:45:03 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:47.573 07:45:03 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:47.573 07:45:03 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:47.573 07:45:03 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:47.573 07:45:03 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:47.573 07:45:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:47.573 07:45:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:47.573 07:45:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:27:47.573 07:45:03 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:47.573 07:45:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:47.573 07:45:03 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:47.573 07:45:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:47.573 07:45:03 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:27:47.573 07:45:03 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:27:47.573 07:45:03 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:27:47.573 07:45:03 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:47.573 07:45:03 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:47.573 07:45:03 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:47.830 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.008 07:45:07 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:27:52.008 07:45:07 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:52.008 07:45:07 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:52.008 07:45:07 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:52.008 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.246 07:45:12 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:56.246 07:45:12 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:56.246 07:45:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:56.246 07:45:12 -- common/autotest_common.sh@10 -- # set +x 00:27:56.246 07:45:12 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:56.246 07:45:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:56.246 07:45:12 -- common/autotest_common.sh@10 -- # set +x 00:27:56.246 07:45:12 -- target/identify_passthru.sh@31 -- # nvmfpid=25838 00:27:56.246 07:45:12 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:56.246 07:45:12 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:56.246 07:45:12 -- target/identify_passthru.sh@35 -- # waitforlisten 25838 00:27:56.246 07:45:12 -- common/autotest_common.sh@819 -- # '[' -z 25838 ']' 00:27:56.246 07:45:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.246 07:45:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:56.246 07:45:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.246 07:45:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:56.246 07:45:12 -- common/autotest_common.sh@10 -- # set +x 00:27:56.246 [2024-07-14 07:45:12.267733] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
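For reference, the controller-discovery step traced above reduces to two commands: gen_nvme.sh emits a bdev_nvme config entry for every local PCIe controller, jq pulls out the transport addresses, and spdk_nvme_identify reads the drive's serial number from the first one. A minimal sketch (repo paths shortened; head -n1 stands in for the helper's first-entry selection):

    # Find the first local NVMe controller and read its serial number.
    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)  # e.g. 0000:88:00.0
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
               | grep 'Serial Number:' | awk '{print $3}')                   # e.g. PHLJ916004901P0FGN

The serial captured here is compared later against what the NVMe-oF subsystem reports over TCP, which is the whole point of the passthru test.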
00:27:56.246 [2024-07-14 07:45:12.267823] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.246 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.246 [2024-07-14 07:45:12.333670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.503 [2024-07-14 07:45:12.440064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:56.503 [2024-07-14 07:45:12.440206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.503 [2024-07-14 07:45:12.440222] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.503 [2024-07-14 07:45:12.440234] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.503 [2024-07-14 07:45:12.440283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.503 [2024-07-14 07:45:12.440307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.503 [2024-07-14 07:45:12.440361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.503 [2024-07-14 07:45:12.440364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.066 07:45:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:57.066 07:45:13 -- common/autotest_common.sh@852 -- # return 0 00:27:57.066 07:45:13 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:57.066 07:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.066 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.066 INFO: Log level set to 20 00:27:57.066 INFO: Requests: 00:27:57.066 { 00:27:57.066 "jsonrpc": "2.0", 00:27:57.066 "method": "nvmf_set_config", 00:27:57.066 "id": 1, 00:27:57.066 "params": { 00:27:57.066 "admin_cmd_passthru": { 00:27:57.066 "identify_ctrlr": true 00:27:57.066 } 00:27:57.066 } 00:27:57.066 } 00:27:57.066 00:27:57.323 INFO: response: 00:27:57.323 { 00:27:57.323 "jsonrpc": "2.0", 00:27:57.323 "id": 1, 00:27:57.323 "result": true 00:27:57.323 } 00:27:57.323 00:27:57.323 07:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.323 07:45:13 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:57.323 07:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.323 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.323 INFO: Setting log level to 20 00:27:57.323 INFO: Setting log level to 20 00:27:57.323 INFO: Log level set to 20 00:27:57.323 INFO: Log level set to 20 00:27:57.323 INFO: Requests: 00:27:57.323 { 00:27:57.323 "jsonrpc": "2.0", 00:27:57.323 "method": "framework_start_init", 00:27:57.323 "id": 1 00:27:57.323 } 00:27:57.323 00:27:57.323 INFO: Requests: 00:27:57.323 { 00:27:57.323 "jsonrpc": "2.0", 00:27:57.323 "method": "framework_start_init", 00:27:57.323 "id": 1 00:27:57.323 } 00:27:57.323 00:27:57.323 [2024-07-14 07:45:13.349277] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:57.323 INFO: response: 00:27:57.323 { 00:27:57.323 "jsonrpc": "2.0", 00:27:57.323 "id": 1, 00:27:57.323 "result": true 00:27:57.323 } 00:27:57.323 00:27:57.323 INFO: response: 00:27:57.323 { 00:27:57.323 "jsonrpc": "2.0", 00:27:57.323 "id": 1, 00:27:57.323 "result": true 00:27:57.323 } 00:27:57.323 00:27:57.323 07:45:13 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.323 07:45:13 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.323 07:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.323 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.323 INFO: Setting log level to 40 00:27:57.323 INFO: Setting log level to 40 00:27:57.323 INFO: Setting log level to 40 00:27:57.323 [2024-07-14 07:45:13.359361] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.323 07:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.323 07:45:13 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:57.323 07:45:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:57.323 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.323 07:45:13 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:27:57.323 07:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.323 07:45:13 -- common/autotest_common.sh@10 -- # set +x 00:28:00.602 Nvme0n1 00:28:00.602 07:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.602 07:45:16 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:00.602 07:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.602 07:45:16 -- common/autotest_common.sh@10 -- # set +x 00:28:00.602 07:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.602 07:45:16 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:00.602 07:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.602 07:45:16 -- common/autotest_common.sh@10 -- # set +x 00:28:00.602 07:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.602 07:45:16 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.602 07:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.602 07:45:16 -- common/autotest_common.sh@10 -- # set +x 00:28:00.602 [2024-07-14 07:45:16.255863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.602 07:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.602 07:45:16 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:00.602 07:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.602 07:45:16 -- common/autotest_common.sh@10 -- # set +x 00:28:00.602 [2024-07-14 07:45:16.263588] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:00.602 [ 00:28:00.602 { 00:28:00.602 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:00.602 "subtype": "Discovery", 00:28:00.602 "listen_addresses": [], 00:28:00.602 "allow_any_host": true, 00:28:00.602 "hosts": [] 00:28:00.602 }, 00:28:00.602 { 00:28:00.602 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.602 "subtype": "NVMe", 00:28:00.602 "listen_addresses": [ 00:28:00.602 { 00:28:00.602 "transport": "TCP", 00:28:00.602 "trtype": "TCP", 00:28:00.602 "adrfam": "IPv4", 00:28:00.602 "traddr": "10.0.0.2", 00:28:00.602 "trsvcid": "4420" 00:28:00.602 } 00:28:00.602 ], 00:28:00.602 "allow_any_host": true, 00:28:00.602 "hosts": [], 00:28:00.602 "serial_number": "SPDK00000000000001", 
00:28:00.602 "model_number": "SPDK bdev Controller", 00:28:00.602 "max_namespaces": 1, 00:28:00.602 "min_cntlid": 1, 00:28:00.602 "max_cntlid": 65519, 00:28:00.602 "namespaces": [ 00:28:00.602 { 00:28:00.602 "nsid": 1, 00:28:00.602 "bdev_name": "Nvme0n1", 00:28:00.602 "name": "Nvme0n1", 00:28:00.602 "nguid": "CCDAA391B72C4B829D22F6E6859B8E35", 00:28:00.602 "uuid": "ccdaa391-b72c-4b82-9d22-f6e6859b8e35" 00:28:00.602 } 00:28:00.602 ] 00:28:00.602 } 00:28:00.602 ] 00:28:00.602 07:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.602 07:45:16 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:00.602 07:45:16 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:00.602 07:45:16 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:00.602 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.602 07:45:16 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:28:00.602 07:45:16 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:00.602 07:45:16 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:00.602 07:45:16 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:00.602 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.602 07:45:16 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:00.602 07:45:16 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:28:00.602 07:45:16 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:00.602 07:45:16 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.602 07:45:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.603 07:45:16 -- common/autotest_common.sh@10 -- # set +x 00:28:00.603 07:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.603 07:45:16 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:00.603 07:45:16 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:00.603 07:45:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:00.603 07:45:16 -- nvmf/common.sh@116 -- # sync 00:28:00.603 07:45:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:00.603 07:45:16 -- nvmf/common.sh@119 -- # set +e 00:28:00.603 07:45:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:00.603 07:45:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:00.603 rmmod nvme_tcp 00:28:00.603 rmmod nvme_fabrics 00:28:00.603 rmmod nvme_keyring 00:28:00.603 07:45:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:00.603 07:45:16 -- nvmf/common.sh@123 -- # set -e 00:28:00.603 07:45:16 -- nvmf/common.sh@124 -- # return 0 00:28:00.603 07:45:16 -- nvmf/common.sh@477 -- # '[' -n 25838 ']' 00:28:00.603 07:45:16 -- nvmf/common.sh@478 -- # killprocess 25838 00:28:00.603 07:45:16 -- common/autotest_common.sh@926 -- # '[' -z 25838 ']' 00:28:00.603 07:45:16 -- common/autotest_common.sh@930 -- # kill -0 25838 00:28:00.603 07:45:16 -- common/autotest_common.sh@931 -- # uname 00:28:00.603 07:45:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:00.603 07:45:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 25838 00:28:00.603 07:45:16 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:00.603 07:45:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:00.603 07:45:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 25838' 00:28:00.603 killing process with pid 25838 00:28:00.603 07:45:16 -- common/autotest_common.sh@945 -- # kill 25838 00:28:00.603 [2024-07-14 07:45:16.590330] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:00.603 07:45:16 -- common/autotest_common.sh@950 -- # wait 25838 00:28:02.501 07:45:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:02.501 07:45:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:02.501 07:45:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:02.501 07:45:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.501 07:45:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:02.501 07:45:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.501 07:45:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:02.501 07:45:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.405 07:45:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:04.405 00:28:04.405 real 0m18.756s 00:28:04.405 user 0m29.564s 00:28:04.405 sys 0m2.329s 00:28:04.405 07:45:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.405 07:45:20 -- common/autotest_common.sh@10 -- # set +x 00:28:04.405 ************************************ 00:28:04.405 END TEST nvmf_identify_passthru 00:28:04.405 ************************************ 00:28:04.405 07:45:20 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:04.405 07:45:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:04.405 07:45:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:04.405 07:45:20 -- common/autotest_common.sh@10 -- # set +x 00:28:04.405 ************************************ 00:28:04.405 START TEST nvmf_dif 00:28:04.405 ************************************ 00:28:04.405 07:45:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:04.405 * Looking for test storage... 
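Before the dif run gets going, it is worth condensing what the identify_passthru test above actually did on the target side: enable the custom identify handler before framework init (it must precede framework_start_init, hence the --wait-for-rpc launch), start the framework, create the TCP transport, attach the local PCIe drive as Nvme0, and export it as cnode1 on 10.0.0.2:4420. A sketch using scripts/rpc.py, which rpc_cmd wraps (default RPC socket assumed):

    rpc=scripts/rpc.py
    $rpc nvmf_set_config --passthru-identify-ctrlr     # must run before framework init
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With that in place, spdk_nvme_identify over trtype:tcp returned the physical drive's serial and model (PHLJ916004901P0FGN / INTEL) rather than SPDK's defaults, so both '!=' comparisons above passed.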
00:28:04.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:04.405 07:45:20 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.405 07:45:20 -- nvmf/common.sh@7 -- # uname -s 00:28:04.405 07:45:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.405 07:45:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.405 07:45:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.405 07:45:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.405 07:45:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.405 07:45:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.405 07:45:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.405 07:45:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.405 07:45:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.405 07:45:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.405 07:45:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:04.405 07:45:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:04.405 07:45:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.405 07:45:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.405 07:45:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.405 07:45:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.405 07:45:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.405 07:45:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.405 07:45:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.405 07:45:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.405 07:45:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.405 07:45:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.405 07:45:20 -- paths/export.sh@5 -- # export PATH 00:28:04.405 07:45:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.405 07:45:20 -- nvmf/common.sh@46 -- # : 0 00:28:04.405 07:45:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:04.405 07:45:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:04.405 07:45:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:04.405 07:45:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.405 07:45:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.405 07:45:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:04.405 07:45:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:04.405 07:45:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:04.405 07:45:20 -- target/dif.sh@15 -- # NULL_META=16 00:28:04.405 07:45:20 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:04.405 07:45:20 -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:04.405 07:45:20 -- target/dif.sh@15 -- # NULL_DIF=1 00:28:04.405 07:45:20 -- target/dif.sh@135 -- # nvmftestinit 00:28:04.405 07:45:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:04.405 07:45:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.405 07:45:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:04.405 07:45:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:04.405 07:45:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:04.405 07:45:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.405 07:45:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:04.405 07:45:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.405 07:45:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:04.405 07:45:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:04.405 07:45:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:04.405 07:45:20 -- common/autotest_common.sh@10 -- # set +x 00:28:06.309 07:45:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:06.309 07:45:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:06.309 07:45:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:06.309 07:45:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:06.309 07:45:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:06.309 07:45:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:06.309 07:45:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:06.309 07:45:22 -- nvmf/common.sh@294 -- # net_devs=() 00:28:06.309 07:45:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:06.309 07:45:22 -- nvmf/common.sh@295 -- # e810=() 00:28:06.309 07:45:22 -- nvmf/common.sh@295 -- # local -ga e810 00:28:06.309 07:45:22 -- nvmf/common.sh@296 -- # x722=() 00:28:06.309 07:45:22 -- nvmf/common.sh@296 -- # local -ga x722 00:28:06.309 07:45:22 -- nvmf/common.sh@297 -- # mlx=() 00:28:06.309 07:45:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:06.309 07:45:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:28:06.309 07:45:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.309 07:45:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:06.309 07:45:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:06.309 07:45:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:06.309 07:45:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:06.309 07:45:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:06.309 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:06.309 07:45:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:06.309 07:45:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:06.309 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:06.309 07:45:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:06.309 07:45:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:06.309 07:45:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.309 07:45:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:06.309 07:45:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.309 07:45:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:06.309 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:06.309 07:45:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.309 07:45:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:06.309 07:45:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.309 07:45:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:06.309 07:45:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.309 07:45:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:06.309 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:06.309 07:45:22 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:06.309 07:45:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:06.309 07:45:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:06.309 07:45:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:06.309 07:45:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:06.309 07:45:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.309 07:45:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.309 07:45:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.309 07:45:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:06.309 07:45:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.309 07:45:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.309 07:45:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:06.309 07:45:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.309 07:45:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.309 07:45:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:06.309 07:45:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:06.309 07:45:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.309 07:45:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.309 07:45:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.309 07:45:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.309 07:45:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:06.309 07:45:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.309 07:45:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.309 07:45:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.309 07:45:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:06.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:28:06.309 00:28:06.309 --- 10.0.0.2 ping statistics --- 00:28:06.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.309 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:28:06.309 07:45:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:28:06.309 00:28:06.309 --- 10.0.0.1 ping statistics --- 00:28:06.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.309 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:28:06.309 07:45:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.309 07:45:22 -- nvmf/common.sh@410 -- # return 0 00:28:06.309 07:45:22 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:28:06.309 07:45:22 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:07.681 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:07.681 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:07.681 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:07.681 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:07.681 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:07.681 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:07.681 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:07.681 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:07.681 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:07.681 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:07.681 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:07.681 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:07.681 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:07.681 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:07.681 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:07.681 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:07.681 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:07.681 07:45:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.681 07:45:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:07.681 07:45:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:07.681 07:45:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.681 07:45:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:07.681 07:45:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:07.681 07:45:23 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:07.681 07:45:23 -- target/dif.sh@137 -- # nvmfappstart 00:28:07.681 07:45:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:07.681 07:45:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:07.682 07:45:23 -- common/autotest_common.sh@10 -- # set +x 00:28:07.682 07:45:23 -- nvmf/common.sh@469 -- # nvmfpid=29161 00:28:07.682 07:45:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:07.682 07:45:23 -- nvmf/common.sh@470 -- # waitforlisten 29161 00:28:07.682 07:45:23 -- common/autotest_common.sh@819 -- # '[' -z 29161 ']' 00:28:07.682 07:45:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.682 07:45:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:07.682 07:45:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
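nvmftestinit has now rebuilt the same two-port loopback fabric the passthru test used: the first E810 port (cvl_0_0) moves into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened, and both directions are ping-verified. Condensed from the nvmf_tcp_init trace (interface names as detected on this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP default port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The dif-specific twist is the ' --dif-insert-or-strip' transport option appended above, which the nvmf_create_transport call below passes through so the target inserts and strips DIF metadata on the data path.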
00:28:07.682 07:45:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:07.682 07:45:23 -- common/autotest_common.sh@10 -- # set +x 00:28:07.682 [2024-07-14 07:45:23.702363] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:07.682 [2024-07-14 07:45:23.702431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.682 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.682 [2024-07-14 07:45:23.764336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.939 [2024-07-14 07:45:23.869191] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:07.939 [2024-07-14 07:45:23.869355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.939 [2024-07-14 07:45:23.869372] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.939 [2024-07-14 07:45:23.869384] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:07.939 [2024-07-14 07:45:23.869423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.504 07:45:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:08.504 07:45:24 -- common/autotest_common.sh@852 -- # return 0 00:28:08.504 07:45:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:08.504 07:45:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:08.504 07:45:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.504 07:45:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.504 07:45:24 -- target/dif.sh@139 -- # create_transport 00:28:08.504 07:45:24 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:08.504 07:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.504 07:45:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.504 [2024-07-14 07:45:24.664255] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.504 07:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.504 07:45:24 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:08.504 07:45:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:08.504 07:45:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:08.504 07:45:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.504 ************************************ 00:28:08.504 START TEST fio_dif_1_default 00:28:08.504 ************************************ 00:28:08.762 07:45:24 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:28:08.762 07:45:24 -- target/dif.sh@86 -- # create_subsystems 0 00:28:08.762 07:45:24 -- target/dif.sh@28 -- # local sub 00:28:08.762 07:45:24 -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.762 07:45:24 -- target/dif.sh@31 -- # create_subsystem 0 00:28:08.762 07:45:24 -- target/dif.sh@18 -- # local sub_id=0 00:28:08.762 07:45:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:08.762 07:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.762 07:45:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.762 bdev_null0 00:28:08.762 07:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.762 07:45:24 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:08.762 07:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.762 07:45:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.762 07:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.762 07:45:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:08.762 07:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.762 07:45:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.762 07:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.762 07:45:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:08.762 07:45:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:08.762 07:45:24 -- common/autotest_common.sh@10 -- # set +x 00:28:08.762 [2024-07-14 07:45:24.704510] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.762 07:45:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:08.762 07:45:24 -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:08.762 07:45:24 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:08.762 07:45:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:08.762 07:45:24 -- nvmf/common.sh@520 -- # config=() 00:28:08.762 07:45:24 -- nvmf/common.sh@520 -- # local subsystem config 00:28:08.762 07:45:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:08.762 07:45:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.762 07:45:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:08.762 { 00:28:08.762 "params": { 00:28:08.762 "name": "Nvme$subsystem", 00:28:08.762 "trtype": "$TEST_TRANSPORT", 00:28:08.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.762 "adrfam": "ipv4", 00:28:08.762 "trsvcid": "$NVMF_PORT", 00:28:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.762 "hdgst": ${hdgst:-false}, 00:28:08.762 "ddgst": ${ddgst:-false} 00:28:08.762 }, 00:28:08.762 "method": "bdev_nvme_attach_controller" 00:28:08.762 } 00:28:08.762 EOF 00:28:08.762 )") 00:28:08.762 07:45:24 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.762 07:45:24 -- target/dif.sh@82 -- # gen_fio_conf 00:28:08.762 07:45:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:08.762 07:45:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:08.762 07:45:24 -- target/dif.sh@54 -- # local file 00:28:08.762 07:45:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:08.762 07:45:24 -- target/dif.sh@56 -- # cat 00:28:08.762 07:45:24 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.762 07:45:24 -- common/autotest_common.sh@1320 -- # shift 00:28:08.762 07:45:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:08.762 07:45:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.762 07:45:24 -- nvmf/common.sh@542 -- # cat 00:28:08.762 07:45:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.762 07:45:24 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:28:08.762 07:45:24 -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.762 07:45:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:08.762 07:45:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:08.762 07:45:24 -- nvmf/common.sh@544 -- # jq . 00:28:08.762 07:45:24 -- nvmf/common.sh@545 -- # IFS=, 00:28:08.762 07:45:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:08.762 "params": { 00:28:08.762 "name": "Nvme0", 00:28:08.762 "trtype": "tcp", 00:28:08.762 "traddr": "10.0.0.2", 00:28:08.762 "adrfam": "ipv4", 00:28:08.762 "trsvcid": "4420", 00:28:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:08.762 "hdgst": false, 00:28:08.762 "ddgst": false 00:28:08.762 }, 00:28:08.762 "method": "bdev_nvme_attach_controller" 00:28:08.762 }' 00:28:08.762 07:45:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:08.762 07:45:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:08.762 07:45:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.762 07:45:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:08.762 07:45:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:08.762 07:45:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:08.762 07:45:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:08.762 07:45:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:08.762 07:45:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:08.762 07:45:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.020 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:09.020 fio-3.35 00:28:09.020 Starting 1 thread 00:28:09.021 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.586 [2024-07-14 07:45:25.550692] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:09.586 [2024-07-14 07:45:25.550756] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:19.552 00:28:19.552 filename0: (groupid=0, jobs=1): err= 0: pid=29482: Sun Jul 14 07:45:35 2024 00:28:19.552 read: IOPS=185, BW=742KiB/s (760kB/s)(7424KiB/10001msec) 00:28:19.552 slat (nsec): min=4288, max=45526, avg=8489.34, stdev=3225.85 00:28:19.552 clat (usec): min=916, max=47416, avg=21526.22, stdev=20460.53 00:28:19.552 lat (usec): min=923, max=47435, avg=21534.71, stdev=20460.55 00:28:19.552 clat percentiles (usec): 00:28:19.552 | 1.00th=[ 938], 5.00th=[ 947], 10.00th=[ 955], 20.00th=[ 971], 00:28:19.552 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[41157], 60.00th=[41681], 00:28:19.552 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:19.552 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:28:19.552 | 99.99th=[47449] 00:28:19.552 bw ( KiB/s): min= 672, max= 768, per=99.96%, avg=742.74, stdev=34.69, samples=19 00:28:19.552 iops : min= 168, max= 192, avg=185.68, stdev= 8.67, samples=19 00:28:19.552 lat (usec) : 1000=36.10% 00:28:19.552 lat (msec) : 2=13.69%, 50=50.22% 00:28:19.552 cpu : usr=90.62%, sys=9.11%, ctx=19, majf=0, minf=248 00:28:19.552 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.552 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.552 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:19.552 00:28:19.552 Run status group 0 (all jobs): 00:28:19.552 READ: bw=742KiB/s (760kB/s), 742KiB/s-742KiB/s (760kB/s-760kB/s), io=7424KiB (7602kB), run=10001-10001msec 00:28:19.809 07:45:35 -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:19.809 07:45:35 -- target/dif.sh@43 -- # local sub 00:28:19.810 07:45:35 -- target/dif.sh@45 -- # for sub in "$@" 00:28:19.810 07:45:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:19.810 07:45:35 -- target/dif.sh@36 -- # local sub_id=0 00:28:19.810 07:45:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:19.810 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.810 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.810 07:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.810 07:45:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:19.810 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.810 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.810 07:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.810 00:28:19.810 real 0m11.245s 00:28:19.810 user 0m10.435s 00:28:19.810 sys 0m1.195s 00:28:19.810 07:45:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.810 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.810 ************************************ 00:28:19.810 END TEST fio_dif_1_default 00:28:19.810 ************************************ 00:28:19.810 07:45:35 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:19.810 07:45:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:19.810 07:45:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:19.810 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.810 ************************************ 00:28:19.810 START TEST 
fio_dif_1_multi_subsystems 00:28:19.810 ************************************ 00:28:19.810 07:45:35 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:28:19.810 07:45:35 -- target/dif.sh@92 -- # local files=1 00:28:19.810 07:45:35 -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:19.810 07:45:35 -- target/dif.sh@28 -- # local sub 00:28:19.810 07:45:35 -- target/dif.sh@30 -- # for sub in "$@" 00:28:19.810 07:45:35 -- target/dif.sh@31 -- # create_subsystem 0 00:28:19.810 07:45:35 -- target/dif.sh@18 -- # local sub_id=0 00:28:19.810 07:45:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:19.810 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.810 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.810 bdev_null0 00:28:19.810 07:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.810 07:45:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:19.810 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.810 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.810 07:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.810 07:45:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:19.810 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.810 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.810 07:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.810 07:45:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:19.810 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.810 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.810 [2024-07-14 07:45:35.979175] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.067 07:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.067 07:45:35 -- target/dif.sh@30 -- # for sub in "$@" 00:28:20.067 07:45:35 -- target/dif.sh@31 -- # create_subsystem 1 00:28:20.067 07:45:35 -- target/dif.sh@18 -- # local sub_id=1 00:28:20.067 07:45:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:20.067 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.067 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:20.067 bdev_null1 00:28:20.067 07:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.067 07:45:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:20.067 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.067 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:20.067 07:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.067 07:45:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:20.067 07:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.067 07:45:35 -- common/autotest_common.sh@10 -- # set +x 00:28:20.067 07:45:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.067 07:45:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.067 07:45:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.067 07:45:36 -- 
common/autotest_common.sh@10 -- # set +x 00:28:20.067 07:45:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.067 07:45:36 -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:20.067 07:45:36 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:20.067 07:45:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:20.067 07:45:36 -- nvmf/common.sh@520 -- # config=() 00:28:20.067 07:45:36 -- nvmf/common.sh@520 -- # local subsystem config 00:28:20.067 07:45:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:20.067 07:45:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.067 07:45:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:20.067 { 00:28:20.067 "params": { 00:28:20.067 "name": "Nvme$subsystem", 00:28:20.067 "trtype": "$TEST_TRANSPORT", 00:28:20.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.067 "adrfam": "ipv4", 00:28:20.067 "trsvcid": "$NVMF_PORT", 00:28:20.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.067 "hdgst": ${hdgst:-false}, 00:28:20.067 "ddgst": ${ddgst:-false} 00:28:20.067 }, 00:28:20.067 "method": "bdev_nvme_attach_controller" 00:28:20.067 } 00:28:20.067 EOF 00:28:20.067 )") 00:28:20.067 07:45:36 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.067 07:45:36 -- target/dif.sh@82 -- # gen_fio_conf 00:28:20.067 07:45:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:20.067 07:45:36 -- target/dif.sh@54 -- # local file 00:28:20.067 07:45:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:20.067 07:45:36 -- target/dif.sh@56 -- # cat 00:28:20.067 07:45:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:20.067 07:45:36 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.067 07:45:36 -- common/autotest_common.sh@1320 -- # shift 00:28:20.067 07:45:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:20.067 07:45:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.067 07:45:36 -- nvmf/common.sh@542 -- # cat 00:28:20.067 07:45:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.067 07:45:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:20.067 07:45:36 -- target/dif.sh@72 -- # (( file <= files )) 00:28:20.067 07:45:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:20.067 07:45:36 -- target/dif.sh@73 -- # cat 00:28:20.067 07:45:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:20.067 07:45:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:20.067 07:45:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:20.067 { 00:28:20.067 "params": { 00:28:20.067 "name": "Nvme$subsystem", 00:28:20.067 "trtype": "$TEST_TRANSPORT", 00:28:20.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.067 "adrfam": "ipv4", 00:28:20.067 "trsvcid": "$NVMF_PORT", 00:28:20.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.067 "hdgst": ${hdgst:-false}, 00:28:20.067 "ddgst": ${ddgst:-false} 00:28:20.067 }, 00:28:20.067 "method": "bdev_nvme_attach_controller" 00:28:20.067 } 00:28:20.067 EOF 00:28:20.067 )") 00:28:20.067 07:45:36 -- 
nvmf/common.sh@542 -- # cat 00:28:20.067 07:45:36 -- target/dif.sh@72 -- # (( file++ )) 00:28:20.067 07:45:36 -- target/dif.sh@72 -- # (( file <= files )) 00:28:20.067 07:45:36 -- nvmf/common.sh@544 -- # jq . 00:28:20.067 07:45:36 -- nvmf/common.sh@545 -- # IFS=, 00:28:20.067 07:45:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:20.067 "params": { 00:28:20.067 "name": "Nvme0", 00:28:20.067 "trtype": "tcp", 00:28:20.067 "traddr": "10.0.0.2", 00:28:20.067 "adrfam": "ipv4", 00:28:20.067 "trsvcid": "4420", 00:28:20.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.067 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:20.067 "hdgst": false, 00:28:20.067 "ddgst": false 00:28:20.067 }, 00:28:20.067 "method": "bdev_nvme_attach_controller" 00:28:20.067 },{ 00:28:20.067 "params": { 00:28:20.067 "name": "Nvme1", 00:28:20.067 "trtype": "tcp", 00:28:20.067 "traddr": "10.0.0.2", 00:28:20.067 "adrfam": "ipv4", 00:28:20.067 "trsvcid": "4420", 00:28:20.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.067 "hdgst": false, 00:28:20.067 "ddgst": false 00:28:20.067 }, 00:28:20.067 "method": "bdev_nvme_attach_controller" 00:28:20.067 }' 00:28:20.067 07:45:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:20.067 07:45:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:20.067 07:45:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.067 07:45:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.067 07:45:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:20.067 07:45:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:20.067 07:45:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:20.067 07:45:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:20.067 07:45:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:20.067 07:45:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.325 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:20.325 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:20.325 fio-3.35 00:28:20.325 Starting 2 threads 00:28:20.325 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.890 [2024-07-14 07:45:36.894061] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:20.890 [2024-07-14 07:45:36.894120] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:33.082 00:28:33.082 filename0: (groupid=0, jobs=1): err= 0: pid=30969: Sun Jul 14 07:45:47 2024 00:28:33.082 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10022msec) 00:28:33.082 slat (nsec): min=6781, max=86490, avg=9331.74, stdev=4479.95 00:28:33.082 clat (usec): min=895, max=42967, avg=21522.26, stdev=20395.54 00:28:33.082 lat (usec): min=902, max=42979, avg=21531.59, stdev=20395.51 00:28:33.082 clat percentiles (usec): 00:28:33.082 | 1.00th=[ 914], 5.00th=[ 938], 10.00th=[ 955], 20.00th=[ 988], 00:28:33.082 | 30.00th=[ 1012], 40.00th=[ 1045], 50.00th=[41157], 60.00th=[41681], 00:28:33.082 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:28:33.082 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:28:33.082 | 99.99th=[42730] 00:28:33.082 bw ( KiB/s): min= 704, max= 768, per=49.98%, avg=742.40, stdev=32.17, samples=20 00:28:33.082 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:28:33.082 lat (usec) : 1000=24.95% 00:28:33.082 lat (msec) : 2=24.73%, 50=50.32% 00:28:33.082 cpu : usr=94.18%, sys=5.53%, ctx=15, majf=0, minf=111 00:28:33.082 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.082 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.082 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:33.082 filename1: (groupid=0, jobs=1): err= 0: pid=30970: Sun Jul 14 07:45:47 2024 00:28:33.082 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10021msec) 00:28:33.082 slat (nsec): min=6732, max=35412, avg=9540.87, stdev=4581.61 00:28:33.082 clat (usec): min=1025, max=43305, avg=21519.99, stdev=20372.09 00:28:33.082 lat (usec): min=1032, max=43340, avg=21529.53, stdev=20370.90 00:28:33.082 clat percentiles (usec): 00:28:33.082 | 1.00th=[ 1037], 5.00th=[ 1057], 10.00th=[ 1074], 20.00th=[ 1090], 00:28:33.082 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[41681], 60.00th=[41681], 00:28:33.082 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:28:33.082 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:28:33.082 | 99.99th=[43254] 00:28:33.082 bw ( KiB/s): min= 704, max= 768, per=49.98%, avg=742.40, stdev=30.45, samples=20 00:28:33.082 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:28:33.082 lat (msec) : 2=49.89%, 50=50.11% 00:28:33.082 cpu : usr=94.75%, sys=4.96%, ctx=13, majf=0, minf=186 00:28:33.082 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.082 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.082 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:33.082 00:28:33.082 Run status group 0 (all jobs): 00:28:33.082 READ: bw=1485KiB/s (1520kB/s), 742KiB/s-742KiB/s (760kB/s-760kB/s), io=14.5MiB (15.2MB), run=10021-10022msec 00:28:33.082 07:45:47 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:33.082 07:45:47 -- target/dif.sh@43 -- # local sub 00:28:33.082 07:45:47 -- target/dif.sh@45 -- # for sub in "$@" 00:28:33.082 07:45:47 -- target/dif.sh@46 -- # destroy_subsystem 0 
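Both dif jobs above drive the target through fio's SPDK bdev plugin rather than the kernel initiator: fio_bdev preloads build/fio/spdk_bdev and hands fio two anonymous fds, one carrying the bdev_nvme attach JSON printed above and one carrying the generated job file. Reduced to its moving parts (a sketch; the fd plumbing is shown schematically with bash process substitution):

    LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
        62< <(gen_nvmf_target_json 0 1) \
        61< <(gen_fio_conf)

gen_nvmf_target_json 0 1 emits the two bdev_nvme_attach_controller entries (Nvme0 -> cnode0, Nvme1 -> cnode1) seen in the trace, so each fio filename maps onto one null-bdev-backed subsystem.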
00:28:33.082 07:45:47 -- target/dif.sh@36 -- # local sub_id=0 00:28:33.082 07:45:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:33.082 07:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.082 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.082 07:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.082 07:45:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:33.082 07:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 07:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.083 07:45:47 -- target/dif.sh@45 -- # for sub in "$@" 00:28:33.083 07:45:47 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:33.083 07:45:47 -- target/dif.sh@36 -- # local sub_id=1 00:28:33.083 07:45:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.083 07:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 07:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.083 07:45:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:33.083 07:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 07:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.083 00:28:33.083 real 0m11.442s 00:28:33.083 user 0m20.311s 00:28:33.083 sys 0m1.313s 00:28:33.083 07:45:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 ************************************ 00:28:33.083 END TEST fio_dif_1_multi_subsystems 00:28:33.083 ************************************ 00:28:33.083 07:45:47 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:33.083 07:45:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:33.083 07:45:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 ************************************ 00:28:33.083 START TEST fio_dif_rand_params 00:28:33.083 ************************************ 00:28:33.083 07:45:47 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:28:33.083 07:45:47 -- target/dif.sh@100 -- # local NULL_DIF 00:28:33.083 07:45:47 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:33.083 07:45:47 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:33.083 07:45:47 -- target/dif.sh@103 -- # bs=128k 00:28:33.083 07:45:47 -- target/dif.sh@103 -- # numjobs=3 00:28:33.083 07:45:47 -- target/dif.sh@103 -- # iodepth=3 00:28:33.083 07:45:47 -- target/dif.sh@103 -- # runtime=5 00:28:33.083 07:45:47 -- target/dif.sh@105 -- # create_subsystems 0 00:28:33.083 07:45:47 -- target/dif.sh@28 -- # local sub 00:28:33.083 07:45:47 -- target/dif.sh@30 -- # for sub in "$@" 00:28:33.083 07:45:47 -- target/dif.sh@31 -- # create_subsystem 0 00:28:33.083 07:45:47 -- target/dif.sh@18 -- # local sub_id=0 00:28:33.083 07:45:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:33.083 07:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 bdev_null0 00:28:33.083 07:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.083 07:45:47 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:33.083 07:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 07:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.083 07:45:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:33.083 07:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 07:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.083 07:45:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:33.083 07:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.083 07:45:47 -- common/autotest_common.sh@10 -- # set +x 00:28:33.083 [2024-07-14 07:45:47.451498] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.083 07:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.083 07:45:47 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:33.083 07:45:47 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:33.083 07:45:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.083 07:45:47 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.083 07:45:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:33.083 07:45:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:33.083 07:45:47 -- target/dif.sh@82 -- # gen_fio_conf 00:28:33.083 07:45:47 -- nvmf/common.sh@520 -- # config=() 00:28:33.083 07:45:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:33.083 07:45:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:33.083 07:45:47 -- nvmf/common.sh@520 -- # local subsystem config 00:28:33.083 07:45:47 -- target/dif.sh@54 -- # local file 00:28:33.083 07:45:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.083 07:45:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:33.083 07:45:47 -- common/autotest_common.sh@1320 -- # shift 00:28:33.083 07:45:47 -- target/dif.sh@56 -- # cat 00:28:33.083 07:45:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:33.083 07:45:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:33.083 { 00:28:33.083 "params": { 00:28:33.083 "name": "Nvme$subsystem", 00:28:33.083 "trtype": "$TEST_TRANSPORT", 00:28:33.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.083 "adrfam": "ipv4", 00:28:33.083 "trsvcid": "$NVMF_PORT", 00:28:33.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.083 "hdgst": ${hdgst:-false}, 00:28:33.083 "ddgst": ${ddgst:-false} 00:28:33.083 }, 00:28:33.083 "method": "bdev_nvme_attach_controller" 00:28:33.083 } 00:28:33.083 EOF 00:28:33.083 )") 00:28:33.083 07:45:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.083 07:45:47 -- nvmf/common.sh@542 -- # cat 00:28:33.083 07:45:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.083 07:45:47 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:28:33.083 07:45:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:33.083 07:45:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:33.083 07:45:47 -- target/dif.sh@72 -- # (( file <= files )) 00:28:33.083 07:45:47 -- nvmf/common.sh@544 -- # jq . 00:28:33.083 07:45:47 -- nvmf/common.sh@545 -- # IFS=, 00:28:33.083 07:45:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:33.083 "params": { 00:28:33.083 "name": "Nvme0", 00:28:33.083 "trtype": "tcp", 00:28:33.083 "traddr": "10.0.0.2", 00:28:33.083 "adrfam": "ipv4", 00:28:33.083 "trsvcid": "4420", 00:28:33.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:33.083 "hdgst": false, 00:28:33.083 "ddgst": false 00:28:33.083 }, 00:28:33.083 "method": "bdev_nvme_attach_controller" 00:28:33.083 }' 00:28:33.083 07:45:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:33.083 07:45:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:33.083 07:45:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.083 07:45:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.083 07:45:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:33.083 07:45:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:33.083 07:45:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:33.083 07:45:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:33.083 07:45:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:33.083 07:45:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.083 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:33.083 ... 00:28:33.083 fio-3.35 00:28:33.083 Starting 3 threads 00:28:33.083 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.083 [2024-07-14 07:45:48.197194] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
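[Editor's sketch] The fio invocation just traced is the central trick of these tests: a stock fio binary LD_PRELOADs SPDK's bdev fio plugin and receives both the bdev JSON config and the job file through anonymous file descriptors (/dev/fd/62 and /dev/fd/61), so nothing touches disk; the "spdk.sock in use" errors around it come from the plugin's embedded SPDK app failing to claim the RPC socket the target already holds, and the I/O path is unaffected. A standalone sketch of the equivalent invocation follows. Assumptions: the plugin path from this workspace, the conventional Nvme0n1 name that bdev_nvme_attach_controller gives the first namespace, the usual "subsystems"/"bdev"/"config" envelope that gen_nvmf_target_json wraps around the params object printed above, and job options reconstructed from the fio banner (randread, bs=128k, iodepth=3, 3 jobs, runtime=5):

#!/usr/bin/env bash
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# Stock fio at /usr/src/fio/fio per the trace; both config and job file
# are handed over via process substitution, mimicking /dev/fd/62 and 61.
LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
) <(cat <<'FIO'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
FIO
)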
00:28:33.083 [2024-07-14 07:45:48.197273] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:37.278 00:28:37.278 filename0: (groupid=0, jobs=1): err= 0: pid=32404: Sun Jul 14 07:45:53 2024 00:28:37.278 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(114MiB/5013msec) 00:28:37.278 slat (nsec): min=7000, max=44816, avg=10923.56, stdev=2892.28 00:28:37.278 clat (usec): min=6184, max=54621, avg=16468.81, stdev=14561.08 00:28:37.278 lat (usec): min=6196, max=54633, avg=16479.73, stdev=14561.01 00:28:37.278 clat percentiles (usec): 00:28:37.278 | 1.00th=[ 6652], 5.00th=[ 7635], 10.00th=[ 8291], 20.00th=[ 8979], 00:28:37.278 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11076], 60.00th=[11863], 00:28:37.278 | 70.00th=[12649], 80.00th=[13829], 90.00th=[51119], 95.00th=[52691], 00:28:37.278 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:28:37.278 | 99.99th=[54789] 00:28:37.278 bw ( KiB/s): min=11520, max=33024, per=29.56%, avg=23270.40, stdev=6017.60, samples=10 00:28:37.278 iops : min= 90, max= 258, avg=181.80, stdev=47.01, samples=10 00:28:37.278 lat (msec) : 10=37.61%, 20=48.25%, 50=1.54%, 100=12.61% 00:28:37.278 cpu : usr=92.64%, sys=6.82%, ctx=9, majf=0, minf=36 00:28:37.278 IO depths : 1=5.8%, 2=94.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.278 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:37.278 filename0: (groupid=0, jobs=1): err= 0: pid=32405: Sun Jul 14 07:45:53 2024 00:28:37.278 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5007msec) 00:28:37.278 slat (nsec): min=7024, max=34080, avg=11940.30, stdev=2799.08 00:28:37.278 clat (usec): min=5819, max=58873, avg=13066.35, stdev=10154.11 00:28:37.278 lat (usec): min=5830, max=58886, avg=13078.29, stdev=10154.04 00:28:37.278 clat percentiles (usec): 00:28:37.278 | 1.00th=[ 6259], 5.00th=[ 7046], 10.00th=[ 7898], 20.00th=[ 8979], 00:28:37.278 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11338], 00:28:37.278 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14353], 95.00th=[50594], 00:28:37.278 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[58983], 00:28:37.278 | 99.99th=[58983] 00:28:37.278 bw ( KiB/s): min=23808, max=36864, per=37.24%, avg=29317.50, stdev=3612.08, samples=10 00:28:37.278 iops : min= 186, max= 288, avg=229.00, stdev=28.24, samples=10 00:28:37.278 lat (msec) : 10=44.34%, 20=49.65%, 50=0.52%, 100=5.49% 00:28:37.278 cpu : usr=90.85%, sys=8.51%, ctx=11, majf=0, minf=132 00:28:37.278 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.278 issued rwts: total=1148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:37.278 filename0: (groupid=0, jobs=1): err= 0: pid=32406: Sun Jul 14 07:45:53 2024 00:28:37.278 read: IOPS=206, BW=25.9MiB/s (27.1MB/s)(131MiB/5047msec) 00:28:37.278 slat (nsec): min=7047, max=58583, avg=11800.47, stdev=2765.58 00:28:37.278 clat (usec): min=5427, max=55507, avg=14398.31, stdev=12473.55 00:28:37.278 lat (usec): min=5445, max=55534, avg=14410.11, stdev=12473.56 00:28:37.278 clat percentiles 
(usec): 00:28:37.278 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 7504], 20.00th=[ 8717], 00:28:37.278 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11731], 00:28:37.278 | 70.00th=[12518], 80.00th=[13173], 90.00th=[15401], 95.00th=[52167], 00:28:37.278 | 99.00th=[54264], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:28:37.278 | 99.99th=[55313] 00:28:37.278 bw ( KiB/s): min=17920, max=35328, per=33.92%, avg=26700.80, stdev=5449.39, samples=10 00:28:37.278 iops : min= 140, max= 276, avg=208.60, stdev=42.57, samples=10 00:28:37.278 lat (msec) : 10=44.64%, 20=45.79%, 50=1.05%, 100=8.52% 00:28:37.278 cpu : usr=92.11%, sys=7.23%, ctx=10, majf=0, minf=170 00:28:37.278 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.278 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:37.278 00:28:37.278 Run status group 0 (all jobs): 00:28:37.278 READ: bw=76.9MiB/s (80.6MB/s), 22.7MiB/s-28.7MiB/s (23.8MB/s-30.1MB/s), io=388MiB (407MB), run=5007-5047msec 00:28:37.536 07:45:53 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:37.536 07:45:53 -- target/dif.sh@43 -- # local sub 00:28:37.536 07:45:53 -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.536 07:45:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:37.536 07:45:53 -- target/dif.sh@36 -- # local sub_id=0 00:28:37.536 07:45:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.536 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.536 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.536 07:45:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:37.536 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.536 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.536 07:45:53 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:37.536 07:45:53 -- target/dif.sh@109 -- # bs=4k 00:28:37.536 07:45:53 -- target/dif.sh@109 -- # numjobs=8 00:28:37.536 07:45:53 -- target/dif.sh@109 -- # iodepth=16 00:28:37.536 07:45:53 -- target/dif.sh@109 -- # runtime= 00:28:37.536 07:45:53 -- target/dif.sh@109 -- # files=2 00:28:37.536 07:45:53 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:37.536 07:45:53 -- target/dif.sh@28 -- # local sub 00:28:37.536 07:45:53 -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.536 07:45:53 -- target/dif.sh@31 -- # create_subsystem 0 00:28:37.536 07:45:53 -- target/dif.sh@18 -- # local sub_id=0 00:28:37.536 07:45:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:37.536 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.536 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 bdev_null0 00:28:37.536 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.536 07:45:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:37.536 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.536 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 07:45:53 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:28:37.536 07:45:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:37.536 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.536 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.536 07:45:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:37.536 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.536 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 [2024-07-14 07:45:53.693799] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.536 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.536 07:45:53 -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.536 07:45:53 -- target/dif.sh@31 -- # create_subsystem 1 00:28:37.536 07:45:53 -- target/dif.sh@18 -- # local sub_id=1 00:28:37.536 07:45:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:37.536 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.536 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 bdev_null1 00:28:37.536 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.536 07:45:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:37.794 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.794 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.794 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.794 07:45:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:37.794 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.794 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.794 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.794 07:45:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.794 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.794 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.794 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.794 07:45:53 -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.794 07:45:53 -- target/dif.sh@31 -- # create_subsystem 2 00:28:37.794 07:45:53 -- target/dif.sh@18 -- # local sub_id=2 00:28:37.794 07:45:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:37.794 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.794 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.794 bdev_null2 00:28:37.794 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.794 07:45:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:37.794 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.794 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.794 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.794 07:45:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:37.794 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.794 07:45:53 -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.794 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.794 07:45:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:37.794 07:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.794 07:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.794 07:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.794 07:45:53 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:37.794 07:45:53 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:37.794 07:45:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:37.794 07:45:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.794 07:45:53 -- nvmf/common.sh@520 -- # config=() 00:28:37.794 07:45:53 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.794 07:45:53 -- nvmf/common.sh@520 -- # local subsystem config 00:28:37.794 07:45:53 -- target/dif.sh@82 -- # gen_fio_conf 00:28:37.794 07:45:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:37.794 07:45:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:37.794 07:45:53 -- target/dif.sh@54 -- # local file 00:28:37.794 07:45:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:37.794 07:45:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:37.794 { 00:28:37.794 "params": { 00:28:37.794 "name": "Nvme$subsystem", 00:28:37.794 "trtype": "$TEST_TRANSPORT", 00:28:37.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.794 "adrfam": "ipv4", 00:28:37.794 "trsvcid": "$NVMF_PORT", 00:28:37.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.794 "hdgst": ${hdgst:-false}, 00:28:37.794 "ddgst": ${ddgst:-false} 00:28:37.794 }, 00:28:37.794 "method": "bdev_nvme_attach_controller" 00:28:37.794 } 00:28:37.794 EOF 00:28:37.794 )") 00:28:37.794 07:45:53 -- target/dif.sh@56 -- # cat 00:28:37.794 07:45:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:37.794 07:45:53 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.794 07:45:53 -- common/autotest_common.sh@1320 -- # shift 00:28:37.794 07:45:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:37.794 07:45:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.794 07:45:53 -- nvmf/common.sh@542 -- # cat 00:28:37.794 07:45:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.794 07:45:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:37.794 07:45:53 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:37.794 07:45:53 -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.794 07:45:53 -- target/dif.sh@73 -- # cat 00:28:37.794 07:45:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:37.794 07:45:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:37.794 07:45:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:37.794 { 00:28:37.794 "params": { 00:28:37.794 "name": "Nvme$subsystem", 00:28:37.794 "trtype": "$TEST_TRANSPORT", 00:28:37.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.794 "adrfam": "ipv4", 00:28:37.794 "trsvcid": 
"$NVMF_PORT", 00:28:37.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.794 "hdgst": ${hdgst:-false}, 00:28:37.794 "ddgst": ${ddgst:-false} 00:28:37.794 }, 00:28:37.794 "method": "bdev_nvme_attach_controller" 00:28:37.794 } 00:28:37.794 EOF 00:28:37.794 )") 00:28:37.794 07:45:53 -- target/dif.sh@72 -- # (( file++ )) 00:28:37.794 07:45:53 -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.794 07:45:53 -- nvmf/common.sh@542 -- # cat 00:28:37.794 07:45:53 -- target/dif.sh@73 -- # cat 00:28:37.794 07:45:53 -- target/dif.sh@72 -- # (( file++ )) 00:28:37.794 07:45:53 -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.794 07:45:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:37.794 07:45:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:37.794 { 00:28:37.794 "params": { 00:28:37.794 "name": "Nvme$subsystem", 00:28:37.794 "trtype": "$TEST_TRANSPORT", 00:28:37.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.794 "adrfam": "ipv4", 00:28:37.794 "trsvcid": "$NVMF_PORT", 00:28:37.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.794 "hdgst": ${hdgst:-false}, 00:28:37.794 "ddgst": ${ddgst:-false} 00:28:37.794 }, 00:28:37.794 "method": "bdev_nvme_attach_controller" 00:28:37.794 } 00:28:37.794 EOF 00:28:37.794 )") 00:28:37.794 07:45:53 -- nvmf/common.sh@542 -- # cat 00:28:37.794 07:45:53 -- nvmf/common.sh@544 -- # jq . 00:28:37.794 07:45:53 -- nvmf/common.sh@545 -- # IFS=, 00:28:37.794 07:45:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:37.794 "params": { 00:28:37.795 "name": "Nvme0", 00:28:37.795 "trtype": "tcp", 00:28:37.795 "traddr": "10.0.0.2", 00:28:37.795 "adrfam": "ipv4", 00:28:37.795 "trsvcid": "4420", 00:28:37.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:37.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:37.795 "hdgst": false, 00:28:37.795 "ddgst": false 00:28:37.795 }, 00:28:37.795 "method": "bdev_nvme_attach_controller" 00:28:37.795 },{ 00:28:37.795 "params": { 00:28:37.795 "name": "Nvme1", 00:28:37.795 "trtype": "tcp", 00:28:37.795 "traddr": "10.0.0.2", 00:28:37.795 "adrfam": "ipv4", 00:28:37.795 "trsvcid": "4420", 00:28:37.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.795 "hdgst": false, 00:28:37.795 "ddgst": false 00:28:37.795 }, 00:28:37.795 "method": "bdev_nvme_attach_controller" 00:28:37.795 },{ 00:28:37.795 "params": { 00:28:37.795 "name": "Nvme2", 00:28:37.795 "trtype": "tcp", 00:28:37.795 "traddr": "10.0.0.2", 00:28:37.795 "adrfam": "ipv4", 00:28:37.795 "trsvcid": "4420", 00:28:37.795 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.795 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.795 "hdgst": false, 00:28:37.795 "ddgst": false 00:28:37.795 }, 00:28:37.795 "method": "bdev_nvme_attach_controller" 00:28:37.795 }' 00:28:37.795 07:45:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:37.795 07:45:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:37.795 07:45:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.795 07:45:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.795 07:45:53 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:37.795 07:45:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:37.795 07:45:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:37.795 07:45:53 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:37.795 07:45:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:37.795 07:45:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.053 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:38.053 ... 00:28:38.053 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:38.053 ... 00:28:38.053 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:38.053 ... 00:28:38.053 fio-3.35 00:28:38.053 Starting 24 threads 00:28:38.053 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.617 [2024-07-14 07:45:54.762102] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:28:38.617 [2024-07-14 07:45:54.762178] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:50.857 00:28:50.857 filename0: (groupid=0, jobs=1): err= 0: pid=33213: Sun Jul 14 07:46:05 2024 00:28:50.857 read: IOPS=59, BW=240KiB/s (245kB/s)(2432KiB/10148msec) 00:28:50.857 slat (nsec): min=8392, max=65360, avg=28823.02, stdev=10675.21 00:28:50.857 clat (msec): min=94, max=460, avg=264.62, stdev=48.68 00:28:50.857 lat (msec): min=94, max=460, avg=264.65, stdev=48.68 00:28:50.857 clat percentiles (msec): 00:28:50.857 | 1.00th=[ 95], 5.00th=[ 167], 10.00th=[ 207], 20.00th=[ 236], 00:28:50.857 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:28:50.857 | 70.00th=[ 284], 80.00th=[ 300], 90.00th=[ 313], 95.00th=[ 317], 00:28:50.857 | 99.00th=[ 359], 99.50th=[ 414], 99.90th=[ 460], 99.95th=[ 460], 00:28:50.857 | 99.99th=[ 460] 00:28:50.857 bw ( KiB/s): min= 128, max= 384, per=4.02%, avg=236.80, stdev=59.78, samples=20 00:28:50.857 iops : min= 32, max= 96, avg=59.20, stdev=14.94, samples=20 00:28:50.857 lat (msec) : 100=2.63%, 250=17.43%, 500=79.93% 00:28:50.857 cpu : usr=98.30%, sys=1.34%, ctx=15, majf=0, minf=9 00:28:50.857 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:28:50.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.857 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.857 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.857 filename0: (groupid=0, jobs=1): err= 0: pid=33214: Sun Jul 14 07:46:05 2024 00:28:50.857 read: IOPS=84, BW=338KiB/s (346kB/s)(3432KiB/10159msec) 00:28:50.857 slat (usec): min=5, max=179, avg=13.90, stdev=14.92 00:28:50.857 clat (msec): min=16, max=298, avg=189.04, stdev=41.81 00:28:50.857 lat (msec): min=16, max=298, avg=189.05, stdev=41.81 00:28:50.857 clat percentiles (msec): 00:28:50.857 | 1.00th=[ 17], 5.00th=[ 146], 10.00th=[ 153], 20.00th=[ 165], 00:28:50.857 | 30.00th=[ 171], 40.00th=[ 178], 50.00th=[ 190], 60.00th=[ 205], 00:28:50.857 | 70.00th=[ 211], 80.00th=[ 222], 90.00th=[ 234], 95.00th=[ 255], 00:28:50.857 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 300], 00:28:50.857 | 99.99th=[ 300] 00:28:50.857 bw ( KiB/s): min= 256, max= 496, per=5.72%, avg=336.80, stdev=66.78, samples=20 00:28:50.857 iops : min= 64, max= 124, avg=84.20, stdev=16.69, samples=20 00:28:50.857 lat (msec) : 
20=1.86%, 100=1.86%, 250=89.51%, 500=6.76% 00:28:50.857 cpu : usr=96.97%, sys=1.62%, ctx=25, majf=0, minf=9 00:28:50.857 IO depths : 1=0.7%, 2=5.9%, 4=21.9%, 8=59.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:28:50.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.857 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.857 issued rwts: total=858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.857 filename0: (groupid=0, jobs=1): err= 0: pid=33215: Sun Jul 14 07:46:05 2024 00:28:50.857 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10139msec) 00:28:50.857 slat (usec): min=4, max=123, avg=82.12, stdev=18.58 00:28:50.857 clat (msec): min=188, max=394, avg=273.30, stdev=27.68 00:28:50.857 lat (msec): min=188, max=394, avg=273.39, stdev=27.68 00:28:50.858 clat percentiles (msec): 00:28:50.858 | 1.00th=[ 207], 5.00th=[ 222], 10.00th=[ 241], 20.00th=[ 259], 00:28:50.858 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:28:50.858 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 317], 95.00th=[ 321], 00:28:50.858 | 99.00th=[ 334], 99.50th=[ 368], 99.90th=[ 393], 99.95th=[ 393], 00:28:50.858 | 99.99th=[ 393] 00:28:50.858 bw ( KiB/s): min= 128, max= 272, per=3.92%, avg=230.35, stdev=50.96, samples=20 00:28:50.858 iops : min= 32, max= 68, avg=57.55, stdev=12.73, samples=20 00:28:50.858 lat (msec) : 250=10.14%, 500=89.86% 00:28:50.858 cpu : usr=98.29%, sys=1.21%, ctx=21, majf=0, minf=9 00:28:50.858 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:28:50.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.858 filename0: (groupid=0, jobs=1): err= 0: pid=33216: Sun Jul 14 07:46:05 2024 00:28:50.858 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10136msec) 00:28:50.858 slat (usec): min=5, max=234, avg=30.88, stdev=18.67 00:28:50.858 clat (msec): min=137, max=401, avg=273.64, stdev=42.44 00:28:50.858 lat (msec): min=137, max=401, avg=273.67, stdev=42.44 00:28:50.858 clat percentiles (msec): 00:28:50.858 | 1.00th=[ 171], 5.00th=[ 176], 10.00th=[ 211], 20.00th=[ 259], 00:28:50.858 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:28:50.858 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 321], 95.00th=[ 359], 00:28:50.858 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 401], 99.95th=[ 401], 00:28:50.858 | 99.99th=[ 401] 00:28:50.858 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=230.40, stdev=50.70, samples=20 00:28:50.858 iops : min= 32, max= 64, avg=57.60, stdev=12.68, samples=20 00:28:50.858 lat (msec) : 250=15.54%, 500=84.46% 00:28:50.858 cpu : usr=97.09%, sys=1.72%, ctx=50, majf=0, minf=9 00:28:50.858 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:50.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.858 filename0: (groupid=0, jobs=1): err= 0: pid=33217: Sun Jul 14 07:46:05 2024 00:28:50.858 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10117msec) 00:28:50.858 slat (usec): min=26, 
max=110, avg=79.12, stdev=15.29 00:28:50.858 clat (msec): min=135, max=433, avg=278.04, stdev=42.86 00:28:50.858 lat (msec): min=135, max=433, avg=278.12, stdev=42.87 00:28:50.858 clat percentiles (msec): 00:28:50.858 | 1.00th=[ 150], 5.00th=[ 209], 10.00th=[ 241], 20.00th=[ 262], 00:28:50.858 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 284], 00:28:50.858 | 70.00th=[ 288], 80.00th=[ 305], 90.00th=[ 317], 95.00th=[ 363], 00:28:50.858 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 435], 99.95th=[ 435], 00:28:50.858 | 99.99th=[ 435] 00:28:50.858 bw ( KiB/s): min= 128, max= 272, per=3.80%, avg=224.00, stdev=55.67, samples=20 00:28:50.858 iops : min= 32, max= 68, avg=56.00, stdev=13.92, samples=20 00:28:50.858 lat (msec) : 250=11.46%, 500=88.54% 00:28:50.858 cpu : usr=97.21%, sys=1.52%, ctx=86, majf=0, minf=9 00:28:50.858 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:28:50.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.858 filename0: (groupid=0, jobs=1): err= 0: pid=33218: Sun Jul 14 07:46:05 2024 00:28:50.858 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10123msec) 00:28:50.858 slat (usec): min=11, max=127, avg=37.20, stdev=19.97 00:28:50.858 clat (msec): min=170, max=423, avg=273.31, stdev=40.36 00:28:50.858 lat (msec): min=170, max=423, avg=273.35, stdev=40.36 00:28:50.858 clat percentiles (msec): 00:28:50.858 | 1.00th=[ 171], 5.00th=[ 178], 10.00th=[ 226], 20.00th=[ 259], 00:28:50.858 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:28:50.858 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 321], 00:28:50.858 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 422], 99.95th=[ 422], 00:28:50.858 | 99.99th=[ 422] 00:28:50.858 bw ( KiB/s): min= 128, max= 320, per=3.92%, avg=230.40, stdev=53.04, samples=20 00:28:50.858 iops : min= 32, max= 80, avg=57.60, stdev=13.26, samples=20 00:28:50.858 lat (msec) : 250=16.22%, 500=83.78% 00:28:50.858 cpu : usr=98.32%, sys=1.18%, ctx=29, majf=0, minf=9 00:28:50.858 IO depths : 1=2.2%, 2=8.3%, 4=24.5%, 8=54.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:28:50.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.858 filename0: (groupid=0, jobs=1): err= 0: pid=33220: Sun Jul 14 07:46:05 2024 00:28:50.858 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10141msec) 00:28:50.858 slat (usec): min=4, max=100, avg=33.92, stdev=15.83 00:28:50.858 clat (msec): min=137, max=401, avg=273.78, stdev=38.21 00:28:50.858 lat (msec): min=137, max=401, avg=273.82, stdev=38.21 00:28:50.858 clat percentiles (msec): 00:28:50.858 | 1.00th=[ 171], 5.00th=[ 180], 10.00th=[ 226], 20.00th=[ 259], 00:28:50.858 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:28:50.858 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 321], 00:28:50.858 | 99.00th=[ 363], 99.50th=[ 372], 99.90th=[ 401], 99.95th=[ 401], 00:28:50.858 | 99.99th=[ 401] 00:28:50.858 bw ( KiB/s): min= 128, max= 384, per=3.92%, avg=230.40, stdev=67.16, samples=20 00:28:50.858 iops : min= 32, max= 96, avg=57.60, stdev=16.79, 
samples=20 00:28:50.858 lat (msec) : 250=13.51%, 500=86.49% 00:28:50.858 cpu : usr=97.25%, sys=1.71%, ctx=25, majf=0, minf=9 00:28:50.858 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:28:50.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.858 filename0: (groupid=0, jobs=1): err= 0: pid=33221: Sun Jul 14 07:46:05 2024 00:28:50.858 read: IOPS=58, BW=236KiB/s (241kB/s)(2368KiB/10043msec) 00:28:50.858 slat (nsec): min=7825, max=60583, avg=23450.49, stdev=12236.24 00:28:50.858 clat (msec): min=163, max=435, avg=271.23, stdev=35.81 00:28:50.858 lat (msec): min=163, max=435, avg=271.25, stdev=35.81 00:28:50.858 clat percentiles (msec): 00:28:50.858 | 1.00th=[ 167], 5.00th=[ 207], 10.00th=[ 224], 20.00th=[ 259], 00:28:50.858 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:28:50.858 | 70.00th=[ 284], 80.00th=[ 300], 90.00th=[ 313], 95.00th=[ 317], 00:28:50.858 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 435], 99.95th=[ 435], 00:28:50.858 | 99.99th=[ 435] 00:28:50.858 bw ( KiB/s): min= 128, max= 272, per=3.92%, avg=230.40, stdev=51.23, samples=20 00:28:50.858 iops : min= 32, max= 68, avg=57.60, stdev=12.81, samples=20 00:28:50.858 lat (msec) : 250=15.20%, 500=84.80% 00:28:50.858 cpu : usr=98.11%, sys=1.54%, ctx=15, majf=0, minf=9 00:28:50.858 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:50.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.858 filename1: (groupid=0, jobs=1): err= 0: pid=33222: Sun Jul 14 07:46:05 2024 00:28:50.858 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10159msec) 00:28:50.858 slat (usec): min=19, max=122, avg=80.30, stdev=15.43 00:28:50.858 clat (msec): min=11, max=399, avg=259.84, stdev=64.56 00:28:50.858 lat (msec): min=11, max=400, avg=259.92, stdev=64.57 00:28:50.858 clat percentiles (msec): 00:28:50.858 | 1.00th=[ 12], 5.00th=[ 97], 10.00th=[ 176], 20.00th=[ 234], 00:28:50.858 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 271], 00:28:50.858 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 317], 95.00th=[ 321], 00:28:50.858 | 99.00th=[ 380], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 401], 00:28:50.858 | 99.99th=[ 401] 00:28:50.858 bw ( KiB/s): min= 128, max= 384, per=4.14%, avg=243.20, stdev=69.37, samples=20 00:28:50.858 iops : min= 32, max= 96, avg=60.80, stdev=17.34, samples=20 00:28:50.858 lat (msec) : 20=2.40%, 50=0.16%, 100=2.56%, 250=16.67%, 500=78.21% 00:28:50.858 cpu : usr=98.22%, sys=1.13%, ctx=67, majf=0, minf=9 00:28:50.858 IO depths : 1=3.4%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:50.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.858 filename1: (groupid=0, jobs=1): err= 0: pid=33223: Sun Jul 14 07:46:05 2024 00:28:50.858 read: IOPS=59, BW=240KiB/s 
(245kB/s)(2432KiB/10147msec) 00:28:50.858 slat (usec): min=4, max=262, avg=44.20, stdev=34.39 00:28:50.858 clat (msec): min=94, max=448, avg=264.50, stdev=46.43 00:28:50.858 lat (msec): min=94, max=448, avg=264.54, stdev=46.44 00:28:50.858 clat percentiles (msec): 00:28:50.858 | 1.00th=[ 95], 5.00th=[ 169], 10.00th=[ 207], 20.00th=[ 255], 00:28:50.858 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:28:50.858 | 70.00th=[ 284], 80.00th=[ 300], 90.00th=[ 313], 95.00th=[ 317], 00:28:50.858 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 447], 99.95th=[ 447], 00:28:50.858 | 99.99th=[ 447] 00:28:50.858 bw ( KiB/s): min= 128, max= 368, per=4.02%, avg=236.80, stdev=56.53, samples=20 00:28:50.858 iops : min= 32, max= 92, avg=59.20, stdev=14.13, samples=20 00:28:50.858 lat (msec) : 100=2.63%, 250=16.45%, 500=80.92% 00:28:50.858 cpu : usr=95.13%, sys=2.44%, ctx=71, majf=0, minf=9 00:28:50.858 IO depths : 1=2.3%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:28:50.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.858 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.858 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.858 filename1: (groupid=0, jobs=1): err= 0: pid=33224: Sun Jul 14 07:46:05 2024 00:28:50.858 read: IOPS=58, BW=233KiB/s (238kB/s)(2360KiB/10135msec) 00:28:50.858 slat (usec): min=10, max=112, avg=69.44, stdev=23.15 00:28:50.858 clat (msec): min=138, max=399, avg=273.92, stdev=41.09 00:28:50.858 lat (msec): min=138, max=399, avg=273.99, stdev=41.09 00:28:50.858 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 171], 5.00th=[ 182], 10.00th=[ 226], 20.00th=[ 259], 00:28:50.859 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:28:50.859 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 321], 95.00th=[ 347], 00:28:50.859 | 99.00th=[ 376], 99.50th=[ 393], 99.90th=[ 401], 99.95th=[ 401], 00:28:50.859 | 99.99th=[ 401] 00:28:50.859 bw ( KiB/s): min= 128, max= 256, per=3.90%, avg=229.60, stdev=48.50, samples=20 00:28:50.859 iops : min= 32, max= 64, avg=57.40, stdev=12.12, samples=20 00:28:50.859 lat (msec) : 250=14.92%, 500=85.08% 00:28:50.859 cpu : usr=97.46%, sys=1.57%, ctx=48, majf=0, minf=9 00:28:50.859 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.859 filename1: (groupid=0, jobs=1): err= 0: pid=33225: Sun Jul 14 07:46:05 2024 00:28:50.859 read: IOPS=59, BW=240KiB/s (245kB/s)(2432KiB/10148msec) 00:28:50.859 slat (nsec): min=6118, max=67307, avg=25502.06, stdev=10536.26 00:28:50.859 clat (msec): min=94, max=412, avg=264.61, stdev=43.37 00:28:50.859 lat (msec): min=94, max=412, avg=264.64, stdev=43.38 00:28:50.859 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 95], 5.00th=[ 171], 10.00th=[ 207], 20.00th=[ 257], 00:28:50.859 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:28:50.859 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 305], 95.00th=[ 317], 00:28:50.859 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 414], 99.95th=[ 414], 00:28:50.859 | 99.99th=[ 414] 00:28:50.859 bw ( KiB/s): min= 128, max= 384, per=4.02%, avg=236.80, stdev=62.64, 
samples=20 00:28:50.859 iops : min= 32, max= 96, avg=59.20, stdev=15.66, samples=20 00:28:50.859 lat (msec) : 100=2.63%, 250=13.16%, 500=84.21% 00:28:50.859 cpu : usr=98.19%, sys=1.44%, ctx=21, majf=0, minf=9 00:28:50.859 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.859 filename1: (groupid=0, jobs=1): err= 0: pid=33226: Sun Jul 14 07:46:05 2024 00:28:50.859 read: IOPS=58, BW=233KiB/s (239kB/s)(2360KiB/10132msec) 00:28:50.859 slat (usec): min=16, max=126, avg=31.92, stdev=13.79 00:28:50.859 clat (msec): min=137, max=394, avg=274.16, stdev=38.07 00:28:50.859 lat (msec): min=137, max=394, avg=274.19, stdev=38.06 00:28:50.859 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 171], 5.00th=[ 190], 10.00th=[ 228], 20.00th=[ 259], 00:28:50.859 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:28:50.859 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 321], 00:28:50.859 | 99.00th=[ 372], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 397], 00:28:50.859 | 99.99th=[ 397] 00:28:50.859 bw ( KiB/s): min= 128, max= 272, per=3.90%, avg=229.60, stdev=50.93, samples=20 00:28:50.859 iops : min= 32, max= 68, avg=57.40, stdev=12.73, samples=20 00:28:50.859 lat (msec) : 250=13.22%, 500=86.78% 00:28:50.859 cpu : usr=98.41%, sys=1.09%, ctx=33, majf=0, minf=9 00:28:50.859 IO depths : 1=2.2%, 2=8.5%, 4=25.1%, 8=54.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:28:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.859 filename1: (groupid=0, jobs=1): err= 0: pid=33227: Sun Jul 14 07:46:05 2024 00:28:50.859 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10141msec) 00:28:50.859 slat (nsec): min=4108, max=55040, avg=24475.96, stdev=7568.13 00:28:50.859 clat (msec): min=121, max=401, avg=273.82, stdev=43.62 00:28:50.859 lat (msec): min=121, max=401, avg=273.84, stdev=43.62 00:28:50.859 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 171], 5.00th=[ 174], 10.00th=[ 211], 20.00th=[ 259], 00:28:50.859 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:28:50.859 | 70.00th=[ 288], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 359], 00:28:50.859 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 401], 99.95th=[ 401], 00:28:50.859 | 99.99th=[ 401] 00:28:50.859 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=230.40, stdev=50.70, samples=20 00:28:50.859 iops : min= 32, max= 64, avg=57.60, stdev=12.68, samples=20 00:28:50.859 lat (msec) : 250=15.54%, 500=84.46% 00:28:50.859 cpu : usr=96.83%, sys=1.84%, ctx=42, majf=0, minf=9 00:28:50.859 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:28:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.859 filename1: (groupid=0, jobs=1): err= 0: pid=33228: Sun Jul 14 07:46:05 
2024 00:28:50.859 read: IOPS=59, BW=240KiB/s (245kB/s)(2432KiB/10154msec) 00:28:50.859 slat (usec): min=4, max=203, avg=65.91, stdev=29.83 00:28:50.859 clat (msec): min=95, max=431, avg=266.60, stdev=45.83 00:28:50.859 lat (msec): min=95, max=431, avg=266.67, stdev=45.84 00:28:50.859 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 96], 5.00th=[ 180], 10.00th=[ 209], 20.00th=[ 255], 00:28:50.859 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:28:50.859 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 321], 00:28:50.859 | 99.00th=[ 376], 99.50th=[ 409], 99.90th=[ 430], 99.95th=[ 430], 00:28:50.859 | 99.99th=[ 430] 00:28:50.859 bw ( KiB/s): min= 128, max= 384, per=4.02%, avg=236.80, stdev=59.55, samples=20 00:28:50.859 iops : min= 32, max= 96, avg=59.20, stdev=14.89, samples=20 00:28:50.859 lat (msec) : 100=2.63%, 250=15.46%, 500=81.91% 00:28:50.859 cpu : usr=97.34%, sys=1.61%, ctx=26, majf=0, minf=9 00:28:50.859 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.859 filename1: (groupid=0, jobs=1): err= 0: pid=33229: Sun Jul 14 07:46:05 2024 00:28:50.859 read: IOPS=62, BW=251KiB/s (257kB/s)(2552KiB/10166msec) 00:28:50.859 slat (usec): min=5, max=182, avg=34.18, stdev=18.67 00:28:50.859 clat (msec): min=7, max=400, avg=254.35, stdev=74.04 00:28:50.859 lat (msec): min=7, max=400, avg=254.38, stdev=74.05 00:28:50.859 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 8], 5.00th=[ 88], 10.00th=[ 171], 20.00th=[ 226], 00:28:50.859 | 30.00th=[ 259], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 271], 00:28:50.859 | 70.00th=[ 284], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 321], 00:28:50.859 | 99.00th=[ 372], 99.50th=[ 397], 99.90th=[ 401], 99.95th=[ 401], 00:28:50.859 | 99.99th=[ 401] 00:28:50.859 bw ( KiB/s): min= 128, max= 512, per=4.22%, avg=248.80, stdev=86.94, samples=20 00:28:50.859 iops : min= 32, max= 128, avg=62.20, stdev=21.73, samples=20 00:28:50.859 lat (msec) : 10=2.04%, 20=2.51%, 100=2.51%, 250=16.46%, 500=76.49% 00:28:50.859 cpu : usr=97.23%, sys=1.73%, ctx=32, majf=0, minf=9 00:28:50.859 IO depths : 1=3.1%, 2=8.8%, 4=25.1%, 8=53.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 issued rwts: total=638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.859 filename2: (groupid=0, jobs=1): err= 0: pid=33230: Sun Jul 14 07:46:05 2024 00:28:50.859 read: IOPS=58, BW=233KiB/s (239kB/s)(2360KiB/10123msec) 00:28:50.859 slat (usec): min=17, max=203, avg=50.03, stdev=34.22 00:28:50.859 clat (msec): min=170, max=427, avg=273.86, stdev=45.68 00:28:50.859 lat (msec): min=170, max=427, avg=273.91, stdev=45.68 00:28:50.859 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 171], 5.00th=[ 171], 10.00th=[ 207], 20.00th=[ 257], 00:28:50.859 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 284], 00:28:50.859 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 363], 00:28:50.859 | 99.00th=[ 372], 99.50th=[ 401], 99.90th=[ 426], 99.95th=[ 426], 00:28:50.859 | 99.99th=[ 426] 00:28:50.859 bw ( 
KiB/s): min= 128, max= 336, per=3.90%, avg=229.60, stdev=59.48, samples=20 00:28:50.859 iops : min= 32, max= 84, avg=57.40, stdev=14.87, samples=20 00:28:50.859 lat (msec) : 250=17.29%, 500=82.71% 00:28:50.859 cpu : usr=95.83%, sys=2.42%, ctx=122, majf=0, minf=9 00:28:50.859 IO depths : 1=2.4%, 2=8.0%, 4=22.5%, 8=56.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:28:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 complete : 0=0.0%, 4=93.7%, 8=1.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.859 filename2: (groupid=0, jobs=1): err= 0: pid=33232: Sun Jul 14 07:46:05 2024 00:28:50.859 read: IOPS=59, BW=240KiB/s (245kB/s)(2432KiB/10147msec) 00:28:50.859 slat (usec): min=7, max=121, avg=78.83, stdev=20.45 00:28:50.859 clat (msec): min=93, max=430, avg=266.37, stdev=46.39 00:28:50.859 lat (msec): min=93, max=430, avg=266.45, stdev=46.40 00:28:50.859 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 94], 5.00th=[ 180], 10.00th=[ 209], 20.00th=[ 255], 00:28:50.859 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 275], 00:28:50.859 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 317], 95.00th=[ 321], 00:28:50.859 | 99.00th=[ 372], 99.50th=[ 409], 99.90th=[ 430], 99.95th=[ 430], 00:28:50.859 | 99.99th=[ 430] 00:28:50.859 bw ( KiB/s): min= 128, max= 368, per=4.02%, avg=236.80, stdev=57.95, samples=20 00:28:50.859 iops : min= 32, max= 92, avg=59.20, stdev=14.49, samples=20 00:28:50.859 lat (msec) : 100=2.63%, 250=15.79%, 500=81.58% 00:28:50.859 cpu : usr=98.53%, sys=1.04%, ctx=15, majf=0, minf=9 00:28:50.859 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.859 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.859 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.859 filename2: (groupid=0, jobs=1): err= 0: pid=33233: Sun Jul 14 07:46:05 2024 00:28:50.859 read: IOPS=57, BW=230KiB/s (235kB/s)(2304KiB/10022msec) 00:28:50.859 slat (usec): min=8, max=202, avg=59.12, stdev=29.68 00:28:50.859 clat (msec): min=136, max=460, avg=277.88, stdev=37.30 00:28:50.859 lat (msec): min=136, max=460, avg=277.94, stdev=37.29 00:28:50.859 clat percentiles (msec): 00:28:50.859 | 1.00th=[ 153], 5.00th=[ 234], 10.00th=[ 241], 20.00th=[ 262], 00:28:50.859 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 284], 00:28:50.860 | 70.00th=[ 288], 80.00th=[ 305], 90.00th=[ 317], 95.00th=[ 317], 00:28:50.860 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 460], 99.95th=[ 460], 00:28:50.860 | 99.99th=[ 460] 00:28:50.860 bw ( KiB/s): min= 128, max= 272, per=3.80%, avg=224.00, stdev=51.91, samples=20 00:28:50.860 iops : min= 32, max= 68, avg=56.00, stdev=12.98, samples=20 00:28:50.860 lat (msec) : 250=10.07%, 500=89.93% 00:28:50.860 cpu : usr=96.00%, sys=2.15%, ctx=146, majf=0, minf=9 00:28:50.860 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:50.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.860 filename2: (groupid=0, 
jobs=1): err= 0: pid=33234: Sun Jul 14 07:46:05 2024 00:28:50.860 read: IOPS=57, BW=230KiB/s (235kB/s)(2304KiB/10022msec) 00:28:50.860 slat (usec): min=8, max=114, avg=74.31, stdev=20.42 00:28:50.860 clat (msec): min=135, max=450, avg=277.76, stdev=41.10 00:28:50.860 lat (msec): min=135, max=450, avg=277.83, stdev=41.09 00:28:50.860 clat percentiles (msec): 00:28:50.860 | 1.00th=[ 150], 5.00th=[ 211], 10.00th=[ 241], 20.00th=[ 262], 00:28:50.860 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 279], 00:28:50.860 | 70.00th=[ 288], 80.00th=[ 305], 90.00th=[ 317], 95.00th=[ 363], 00:28:50.860 | 99.00th=[ 430], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:28:50.860 | 99.99th=[ 451] 00:28:50.860 bw ( KiB/s): min= 128, max= 256, per=3.80%, avg=224.00, stdev=55.18, samples=20 00:28:50.860 iops : min= 32, max= 64, avg=56.00, stdev=13.80, samples=20 00:28:50.860 lat (msec) : 250=11.11%, 500=88.89% 00:28:50.860 cpu : usr=98.36%, sys=1.16%, ctx=19, majf=0, minf=9 00:28:50.860 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:28:50.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.860 filename2: (groupid=0, jobs=1): err= 0: pid=33235: Sun Jul 14 07:46:05 2024 00:28:50.860 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10156msec) 00:28:50.860 slat (usec): min=7, max=425, avg=19.63, stdev=26.21 00:28:50.860 clat (msec): min=97, max=448, avg=260.11, stdev=57.87 00:28:50.860 lat (msec): min=98, max=448, avg=260.13, stdev=57.87 00:28:50.860 clat percentiles (msec): 00:28:50.860 | 1.00th=[ 99], 5.00th=[ 150], 10.00th=[ 167], 20.00th=[ 234], 00:28:50.860 | 30.00th=[ 262], 40.00th=[ 264], 50.00th=[ 271], 60.00th=[ 275], 00:28:50.860 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 309], 95.00th=[ 313], 00:28:50.860 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 447], 99.95th=[ 447], 00:28:50.860 | 99.99th=[ 447] 00:28:50.860 bw ( KiB/s): min= 128, max= 384, per=4.14%, avg=243.20, stdev=53.85, samples=20 00:28:50.860 iops : min= 32, max= 96, avg=60.80, stdev=13.46, samples=20 00:28:50.860 lat (msec) : 100=2.56%, 250=20.19%, 500=77.24% 00:28:50.860 cpu : usr=96.13%, sys=2.07%, ctx=127, majf=0, minf=9 00:28:50.860 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:50.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.860 filename2: (groupid=0, jobs=1): err= 0: pid=33236: Sun Jul 14 07:46:05 2024 00:28:50.860 read: IOPS=60, BW=242KiB/s (248kB/s)(2432KiB/10052msec) 00:28:50.860 slat (usec): min=7, max=132, avg=28.86, stdev=14.99 00:28:50.860 clat (msec): min=94, max=435, avg=264.29, stdev=45.88 00:28:50.860 lat (msec): min=94, max=435, avg=264.31, stdev=45.88 00:28:50.860 clat percentiles (msec): 00:28:50.860 | 1.00th=[ 95], 5.00th=[ 167], 10.00th=[ 207], 20.00th=[ 255], 00:28:50.860 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:28:50.860 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 313], 95.00th=[ 317], 00:28:50.860 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 435], 99.95th=[ 435], 00:28:50.860 | 99.99th=[ 435] 
00:28:50.860 bw ( KiB/s): min= 128, max= 384, per=4.02%, avg=236.80, stdev=61.33, samples=20 00:28:50.860 iops : min= 32, max= 96, avg=59.20, stdev=15.33, samples=20 00:28:50.860 lat (msec) : 100=2.63%, 250=15.46%, 500=81.91% 00:28:50.860 cpu : usr=96.93%, sys=1.89%, ctx=32, majf=0, minf=9 00:28:50.860 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:50.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.860 filename2: (groupid=0, jobs=1): err= 0: pid=33237: Sun Jul 14 07:46:05 2024 00:28:50.860 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10132msec) 00:28:50.860 slat (usec): min=21, max=123, avg=80.23, stdev=14.69 00:28:50.860 clat (msec): min=137, max=399, avg=273.13, stdev=42.24 00:28:50.860 lat (msec): min=137, max=399, avg=273.21, stdev=42.25 00:28:50.860 clat percentiles (msec): 00:28:50.860 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 211], 20.00th=[ 259], 00:28:50.860 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:28:50.860 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 359], 00:28:50.860 | 99.00th=[ 380], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 401], 00:28:50.860 | 99.99th=[ 401] 00:28:50.860 bw ( KiB/s): min= 128, max= 256, per=3.92%, avg=230.35, stdev=50.68, samples=20 00:28:50.860 iops : min= 32, max= 64, avg=57.55, stdev=12.65, samples=20 00:28:50.860 lat (msec) : 250=15.54%, 500=84.46% 00:28:50.860 cpu : usr=98.29%, sys=1.27%, ctx=11, majf=0, minf=9 00:28:50.860 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:50.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:50.860 filename2: (groupid=0, jobs=1): err= 0: pid=33238: Sun Jul 14 07:46:05 2024 00:28:50.860 read: IOPS=87, BW=350KiB/s (358kB/s)(3552KiB/10157msec) 00:28:50.860 slat (usec): min=7, max=132, avg=16.93, stdev=16.57 00:28:50.860 clat (msec): min=5, max=339, avg=182.25, stdev=54.77 00:28:50.860 lat (msec): min=5, max=339, avg=182.27, stdev=54.76 00:28:50.860 clat percentiles (msec): 00:28:50.860 | 1.00th=[ 8], 5.00th=[ 80], 10.00th=[ 124], 20.00th=[ 153], 00:28:50.860 | 30.00th=[ 167], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 197], 00:28:50.860 | 70.00th=[ 207], 80.00th=[ 215], 90.00th=[ 239], 95.00th=[ 266], 00:28:50.860 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 338], 99.95th=[ 338], 00:28:50.860 | 99.99th=[ 338] 00:28:50.860 bw ( KiB/s): min= 256, max= 688, per=5.93%, avg=348.80, stdev=95.94, samples=20 00:28:50.860 iops : min= 64, max= 172, avg=87.20, stdev=23.99, samples=20 00:28:50.860 lat (msec) : 10=2.70%, 20=0.45%, 50=0.45%, 100=3.83%, 250=86.26% 00:28:50.860 lat (msec) : 500=6.31% 00:28:50.860 cpu : usr=97.86%, sys=1.64%, ctx=17, majf=0, minf=10 00:28:50.860 IO depths : 1=0.5%, 2=2.4%, 4=13.2%, 8=71.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:50.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.860 issued rwts: total=888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.860 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:28:50.860 00:28:50.860 Run status group 0 (all jobs): 00:28:50.860 READ: bw=5871KiB/s (6012kB/s), 228KiB/s-350KiB/s (233kB/s-358kB/s), io=58.3MiB (61.1MB), run=10022-10166msec 00:28:50.860 07:46:05 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:50.860 07:46:05 -- target/dif.sh@43 -- # local sub 00:28:50.860 07:46:05 -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.860 07:46:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:50.860 07:46:05 -- target/dif.sh@36 -- # local sub_id=0 00:28:50.860 07:46:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:50.860 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.860 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.860 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.860 07:46:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:50.860 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.860 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.860 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.860 07:46:05 -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.860 07:46:05 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:50.860 07:46:05 -- target/dif.sh@36 -- # local sub_id=1 00:28:50.860 07:46:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.860 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.860 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.860 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.860 07:46:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:50.860 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.860 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.860 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.860 07:46:05 -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.860 07:46:05 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:50.860 07:46:05 -- target/dif.sh@36 -- # local sub_id=2 00:28:50.860 07:46:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:50.860 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.860 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.860 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.860 07:46:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:50.860 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.860 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.860 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.860 07:46:05 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:50.860 07:46:05 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:50.860 07:46:05 -- target/dif.sh@115 -- # numjobs=2 00:28:50.860 07:46:05 -- target/dif.sh@115 -- # iodepth=8 00:28:50.860 07:46:05 -- target/dif.sh@115 -- # runtime=5 00:28:50.860 07:46:05 -- target/dif.sh@115 -- # files=1 00:28:50.860 07:46:05 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:50.860 07:46:05 -- target/dif.sh@28 -- # local sub 00:28:50.860 07:46:05 -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.860 07:46:05 -- target/dif.sh@31 -- # create_subsystem 0 00:28:50.860 07:46:05 -- target/dif.sh@18 -- # local sub_id=0 00:28:50.860 07:46:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 1 00:28:50.860 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.860 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.860 bdev_null0 00:28:50.861 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.861 07:46:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:50.861 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.861 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.861 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.861 07:46:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:50.861 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.861 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.861 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.861 07:46:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.861 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.861 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.861 [2024-07-14 07:46:05.473718] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.861 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.861 07:46:05 -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.861 07:46:05 -- target/dif.sh@31 -- # create_subsystem 1 00:28:50.861 07:46:05 -- target/dif.sh@18 -- # local sub_id=1 00:28:50.861 07:46:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:50.861 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.861 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.861 bdev_null1 00:28:50.861 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.861 07:46:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:50.861 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.861 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.861 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.861 07:46:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:50.861 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.861 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.861 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.861 07:46:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.861 07:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.861 07:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:50.861 07:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.861 07:46:05 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:50.861 07:46:05 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:50.861 07:46:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:50.861 07:46:05 -- nvmf/common.sh@520 -- # config=() 00:28:50.861 07:46:05 -- nvmf/common.sh@520 -- # local subsystem config 00:28:50.861 07:46:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:50.861 07:46:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
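For orientation, the create_subsystems sequence traced above reduces to four RPCs per subsystem, with exactly the arguments the trace shows. A condensed sketch (scripts/rpc.py as the entry point is an assumption; the harness routes these calls through its rpc_cmd wrapper):

for i in 0 1; do
    # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 1
    scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done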
00:28:50.861 07:46:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:50.861 { 00:28:50.861 "params": { 00:28:50.861 "name": "Nvme$subsystem", 00:28:50.861 "trtype": "$TEST_TRANSPORT", 00:28:50.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.861 "adrfam": "ipv4", 00:28:50.861 "trsvcid": "$NVMF_PORT", 00:28:50.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.861 "hdgst": ${hdgst:-false}, 00:28:50.861 "ddgst": ${ddgst:-false} 00:28:50.861 }, 00:28:50.861 "method": "bdev_nvme_attach_controller" 00:28:50.861 } 00:28:50.861 EOF 00:28:50.861 )") 00:28:50.861 07:46:05 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.861 07:46:05 -- target/dif.sh@82 -- # gen_fio_conf 00:28:50.861 07:46:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:50.861 07:46:05 -- target/dif.sh@54 -- # local file 00:28:50.861 07:46:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:50.861 07:46:05 -- target/dif.sh@56 -- # cat 00:28:50.861 07:46:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:50.861 07:46:05 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.861 07:46:05 -- common/autotest_common.sh@1320 -- # shift 00:28:50.861 07:46:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:50.861 07:46:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.861 07:46:05 -- nvmf/common.sh@542 -- # cat 00:28:50.861 07:46:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:50.861 07:46:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.861 07:46:05 -- target/dif.sh@72 -- # (( file <= files )) 00:28:50.861 07:46:05 -- target/dif.sh@73 -- # cat 00:28:50.861 07:46:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:50.861 07:46:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:50.861 07:46:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:50.861 07:46:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:50.861 { 00:28:50.861 "params": { 00:28:50.861 "name": "Nvme$subsystem", 00:28:50.861 "trtype": "$TEST_TRANSPORT", 00:28:50.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.861 "adrfam": "ipv4", 00:28:50.861 "trsvcid": "$NVMF_PORT", 00:28:50.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.861 "hdgst": ${hdgst:-false}, 00:28:50.861 "ddgst": ${ddgst:-false} 00:28:50.861 }, 00:28:50.861 "method": "bdev_nvme_attach_controller" 00:28:50.861 } 00:28:50.861 EOF 00:28:50.861 )") 00:28:50.861 07:46:05 -- target/dif.sh@72 -- # (( file++ )) 00:28:50.861 07:46:05 -- nvmf/common.sh@542 -- # cat 00:28:50.861 07:46:05 -- target/dif.sh@72 -- # (( file <= files )) 00:28:50.861 07:46:05 -- nvmf/common.sh@544 -- # jq . 
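The JSON handed to fio_bdev on /dev/fd/62 is assembled by gen_nvmf_target_json with the heredoc-into-array pattern traced above: one bdev_nvme_attach_controller fragment per subsystem, comma-joined by "${config[*]}" under IFS=',' and validated with jq. A condensed sketch of that pattern (gen_json_sketch is a hypothetical name; the outer wrapper is reduced to the bdev config fio needs, and the field list mirrors the fragments printed in the trace):

gen_json_sketch() {
    local sub config=()
    for sub in "$@"; do
        config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  }
}
EOF
        )")
    done
    local IFS=,
    # "${config[*]}" joins the fragments with commas; jq fails loudly on bad JSON
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}
gen_json_sketch 0 1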
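The launch itself, visible in the LD_PRELOAD line further on, is plain fio with the SPDK bdev engine preloaded; the JSON config and the fio job file both arrive over /dev/fd via process substitution. Reduced to one command, with the long workspace path shortened to $SPDK_DIR for readability:

# /dev/fd/62 carries the generated JSON, /dev/fd/61 the fio job file; both
# are wired up by the fio_bdev wrapper before the real fio binary runs
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

The rpc.c *ERROR* lines that appear right after the threads start are expected noise in this setup: fio's embedded SPDK application tries to bring up an RPC server on the default /var/tmp/spdk.sock, which is most likely already held by the running nvmf target (unix sockets live in the filesystem, so the network namespace does not isolate them). As the per-thread results that follow show, the run proceeds regardless.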
00:28:50.861 07:46:05 -- nvmf/common.sh@545 -- # IFS=, 00:28:50.861 07:46:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:50.861 "params": { 00:28:50.861 "name": "Nvme0", 00:28:50.861 "trtype": "tcp", 00:28:50.861 "traddr": "10.0.0.2", 00:28:50.861 "adrfam": "ipv4", 00:28:50.861 "trsvcid": "4420", 00:28:50.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.861 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.861 "hdgst": false, 00:28:50.861 "ddgst": false 00:28:50.861 }, 00:28:50.861 "method": "bdev_nvme_attach_controller" 00:28:50.861 },{ 00:28:50.861 "params": { 00:28:50.861 "name": "Nvme1", 00:28:50.861 "trtype": "tcp", 00:28:50.861 "traddr": "10.0.0.2", 00:28:50.861 "adrfam": "ipv4", 00:28:50.861 "trsvcid": "4420", 00:28:50.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.861 "hdgst": false, 00:28:50.861 "ddgst": false 00:28:50.861 }, 00:28:50.861 "method": "bdev_nvme_attach_controller" 00:28:50.861 }' 00:28:50.861 07:46:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:50.861 07:46:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:50.861 07:46:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.861 07:46:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.861 07:46:05 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:50.861 07:46:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:50.861 07:46:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:50.861 07:46:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:50.861 07:46:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:50.861 07:46:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.861 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:50.861 ... 00:28:50.861 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:50.861 ... 00:28:50.861 fio-3.35 00:28:50.861 Starting 4 threads 00:28:50.861 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.861 [2024-07-14 07:46:06.457350] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:50.861 [2024-07-14 07:46:06.457404] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:56.140 00:28:56.140 filename0: (groupid=0, jobs=1): err= 0: pid=34756: Sun Jul 14 07:46:11 2024 00:28:56.140 read: IOPS=1700, BW=13.3MiB/s (13.9MB/s)(66.5MiB/5003msec) 00:28:56.140 slat (nsec): min=4378, max=45955, avg=12217.99, stdev=5207.86 00:28:56.140 clat (usec): min=1764, max=8107, avg=4667.29, stdev=687.00 00:28:56.140 lat (usec): min=1778, max=8121, avg=4679.51, stdev=686.67 00:28:56.140 clat percentiles (usec): 00:28:56.140 | 1.00th=[ 3261], 5.00th=[ 3752], 10.00th=[ 4015], 20.00th=[ 4293], 00:28:56.140 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4621], 00:28:56.140 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5473], 95.00th=[ 6259], 00:28:56.140 | 99.00th=[ 7242], 99.50th=[ 7439], 99.90th=[ 7767], 99.95th=[ 7767], 00:28:56.140 | 99.99th=[ 8094] 00:28:56.140 bw ( KiB/s): min=13056, max=14192, per=25.03%, avg=13602.60, stdev=295.00, samples=10 00:28:56.140 iops : min= 1632, max= 1774, avg=1700.30, stdev=36.89, samples=10 00:28:56.140 lat (msec) : 2=0.02%, 4=9.13%, 10=90.84% 00:28:56.140 cpu : usr=95.60%, sys=3.92%, ctx=14, majf=0, minf=9 00:28:56.140 IO depths : 1=0.2%, 2=2.0%, 4=69.4%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.140 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.140 issued rwts: total=8508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.140 filename0: (groupid=0, jobs=1): err= 0: pid=34757: Sun Jul 14 07:46:11 2024 00:28:56.140 read: IOPS=1702, BW=13.3MiB/s (13.9MB/s)(66.5MiB/5001msec) 00:28:56.140 slat (nsec): min=3908, max=45668, avg=12242.94, stdev=5187.31 00:28:56.140 clat (usec): min=2241, max=7845, avg=4661.35, stdev=689.73 00:28:56.140 lat (usec): min=2250, max=7856, avg=4673.59, stdev=689.42 00:28:56.141 clat percentiles (usec): 00:28:56.141 | 1.00th=[ 3261], 5.00th=[ 3785], 10.00th=[ 4015], 20.00th=[ 4293], 00:28:56.141 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4621], 00:28:56.141 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5342], 95.00th=[ 6390], 00:28:56.141 | 99.00th=[ 7242], 99.50th=[ 7308], 99.90th=[ 7504], 99.95th=[ 7701], 00:28:56.141 | 99.99th=[ 7832] 00:28:56.141 bw ( KiB/s): min=12944, max=13936, per=25.18%, avg=13681.22, stdev=298.00, samples=9 00:28:56.141 iops : min= 1618, max= 1742, avg=1710.11, stdev=37.25, samples=9 00:28:56.141 lat (msec) : 4=9.24%, 10=90.76% 00:28:56.141 cpu : usr=95.78%, sys=3.68%, ctx=33, majf=0, minf=9 00:28:56.141 IO depths : 1=0.3%, 2=3.2%, 4=69.3%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.141 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.141 issued rwts: total=8513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.141 filename1: (groupid=0, jobs=1): err= 0: pid=34758: Sun Jul 14 07:46:11 2024 00:28:56.141 read: IOPS=1669, BW=13.0MiB/s (13.7MB/s)(65.2MiB/5002msec) 00:28:56.141 slat (nsec): min=3940, max=40843, avg=11090.63, stdev=4422.49 00:28:56.141 clat (usec): min=2245, max=9678, avg=4755.63, stdev=664.24 00:28:56.141 lat (usec): min=2254, max=9690, avg=4766.72, stdev=664.17 00:28:56.141 clat percentiles (usec): 00:28:56.141 | 1.00th=[ 3490], 5.00th=[ 4015], 
10.00th=[ 4178], 20.00th=[ 4359], 00:28:56.141 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:28:56.141 | 70.00th=[ 4752], 80.00th=[ 4948], 90.00th=[ 5473], 95.00th=[ 6325], 00:28:56.141 | 99.00th=[ 7177], 99.50th=[ 7439], 99.90th=[ 8455], 99.95th=[ 8586], 00:28:56.141 | 99.99th=[ 9634] 00:28:56.141 bw ( KiB/s): min=13088, max=13680, per=24.53%, avg=13329.78, stdev=211.12, samples=9 00:28:56.141 iops : min= 1636, max= 1710, avg=1666.22, stdev=26.39, samples=9 00:28:56.141 lat (msec) : 4=5.00%, 10=95.00% 00:28:56.141 cpu : usr=95.62%, sys=3.86%, ctx=7, majf=0, minf=9 00:28:56.141 IO depths : 1=0.1%, 2=5.1%, 4=65.9%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.141 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.141 issued rwts: total=8352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.141 filename1: (groupid=0, jobs=1): err= 0: pid=34759: Sun Jul 14 07:46:11 2024 00:28:56.141 read: IOPS=1720, BW=13.4MiB/s (14.1MB/s)(67.3MiB/5003msec) 00:28:56.141 slat (nsec): min=6244, max=61995, avg=12994.92, stdev=6740.37 00:28:56.141 clat (usec): min=2136, max=7847, avg=4604.10, stdev=609.53 00:28:56.141 lat (usec): min=2147, max=7854, avg=4617.09, stdev=609.49 00:28:56.141 clat percentiles (usec): 00:28:56.141 | 1.00th=[ 3163], 5.00th=[ 3687], 10.00th=[ 3982], 20.00th=[ 4293], 00:28:56.141 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4621], 00:28:56.141 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5800], 00:28:56.141 | 99.00th=[ 6718], 99.50th=[ 7177], 99.90th=[ 7701], 99.95th=[ 7767], 00:28:56.141 | 99.99th=[ 7832] 00:28:56.141 bw ( KiB/s): min=13552, max=14064, per=25.34%, avg=13771.20, stdev=166.11, samples=10 00:28:56.141 iops : min= 1694, max= 1758, avg=1721.40, stdev=20.76, samples=10 00:28:56.141 lat (msec) : 4=10.14%, 10=89.86% 00:28:56.141 cpu : usr=95.08%, sys=4.38%, ctx=9, majf=0, minf=9 00:28:56.141 IO depths : 1=0.3%, 2=6.4%, 4=66.4%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:56.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.141 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.141 issued rwts: total=8610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.141 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:56.141 00:28:56.141 Run status group 0 (all jobs): 00:28:56.141 READ: bw=53.1MiB/s (55.6MB/s), 13.0MiB/s-13.4MiB/s (13.7MB/s-14.1MB/s), io=265MiB (278MB), run=5001-5003msec 00:28:56.141 07:46:11 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:56.141 07:46:11 -- target/dif.sh@43 -- # local sub 00:28:56.141 07:46:11 -- target/dif.sh@45 -- # for sub in "$@" 00:28:56.141 07:46:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:56.141 07:46:11 -- target/dif.sh@36 -- # local sub_id=0 00:28:56.141 07:46:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:56.141 07:46:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 07:46:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.141 07:46:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:56.141 07:46:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 07:46:11 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.141 07:46:11 -- target/dif.sh@45 -- # for sub in "$@" 00:28:56.141 07:46:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:56.141 07:46:11 -- target/dif.sh@36 -- # local sub_id=1 00:28:56.141 07:46:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.141 07:46:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 07:46:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.141 07:46:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:56.141 07:46:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 07:46:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.141 00:28:56.141 real 0m24.515s 00:28:56.141 user 4m34.417s 00:28:56.141 sys 0m6.723s 00:28:56.141 07:46:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 ************************************ 00:28:56.141 END TEST fio_dif_rand_params 00:28:56.141 ************************************ 00:28:56.141 07:46:11 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:56.141 07:46:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:56.141 07:46:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 ************************************ 00:28:56.141 START TEST fio_dif_digest 00:28:56.141 ************************************ 00:28:56.141 07:46:11 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:28:56.141 07:46:11 -- target/dif.sh@123 -- # local NULL_DIF 00:28:56.141 07:46:11 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:56.141 07:46:11 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:56.141 07:46:11 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:56.141 07:46:11 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:56.141 07:46:11 -- target/dif.sh@127 -- # numjobs=3 00:28:56.141 07:46:11 -- target/dif.sh@127 -- # iodepth=3 00:28:56.141 07:46:11 -- target/dif.sh@127 -- # runtime=10 00:28:56.141 07:46:11 -- target/dif.sh@128 -- # hdgst=true 00:28:56.141 07:46:11 -- target/dif.sh@128 -- # ddgst=true 00:28:56.141 07:46:11 -- target/dif.sh@130 -- # create_subsystems 0 00:28:56.141 07:46:11 -- target/dif.sh@28 -- # local sub 00:28:56.141 07:46:11 -- target/dif.sh@30 -- # for sub in "$@" 00:28:56.141 07:46:11 -- target/dif.sh@31 -- # create_subsystem 0 00:28:56.141 07:46:11 -- target/dif.sh@18 -- # local sub_id=0 00:28:56.141 07:46:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:56.141 07:46:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 bdev_null0 00:28:56.141 07:46:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.141 07:46:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:56.141 07:46:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 07:46:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.141 07:46:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:56.141 
07:46:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 07:46:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.141 07:46:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:56.141 07:46:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.141 07:46:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.141 [2024-07-14 07:46:11.997242] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.141 07:46:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.141 07:46:12 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:56.141 07:46:12 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:56.141 07:46:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:56.141 07:46:12 -- nvmf/common.sh@520 -- # config=() 00:28:56.141 07:46:12 -- nvmf/common.sh@520 -- # local subsystem config 00:28:56.141 07:46:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:56.141 07:46:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:56.141 07:46:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:56.141 { 00:28:56.141 "params": { 00:28:56.141 "name": "Nvme$subsystem", 00:28:56.141 "trtype": "$TEST_TRANSPORT", 00:28:56.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.141 "adrfam": "ipv4", 00:28:56.141 "trsvcid": "$NVMF_PORT", 00:28:56.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.141 "hdgst": ${hdgst:-false}, 00:28:56.141 "ddgst": ${ddgst:-false} 00:28:56.141 }, 00:28:56.141 "method": "bdev_nvme_attach_controller" 00:28:56.141 } 00:28:56.141 EOF 00:28:56.141 )") 00:28:56.141 07:46:12 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:56.141 07:46:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:56.141 07:46:12 -- target/dif.sh@82 -- # gen_fio_conf 00:28:56.141 07:46:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:56.141 07:46:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:56.141 07:46:12 -- target/dif.sh@54 -- # local file 00:28:56.141 07:46:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:56.141 07:46:12 -- target/dif.sh@56 -- # cat 00:28:56.142 07:46:12 -- common/autotest_common.sh@1320 -- # shift 00:28:56.142 07:46:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:56.142 07:46:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.142 07:46:12 -- nvmf/common.sh@542 -- # cat 00:28:56.142 07:46:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:56.142 07:46:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:56.142 07:46:12 -- target/dif.sh@72 -- # (( file <= files )) 00:28:56.142 07:46:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:56.142 07:46:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:56.142 07:46:12 -- nvmf/common.sh@544 -- # jq . 
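Before exec'ing fio, the fio_plugin helper probes the plugin with ldd for sanitizer runtimes, since a sanitized plugin must have its sanitizer library ahead of it in LD_PRELOAD or fio aborts at startup. The grep/awk pair traced here (and in the earlier run) is that probe; a condensed sketch, where $SPDK_DIR again stands for the workspace path in the trace:

plugin="$SPDK_DIR/build/fio/spdk_bdev"
asan_libs=
for sanitizer in libasan libclang_rt.asan; do
    # the third ldd column is the resolved library path; empty when not linked
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $lib ]] && asan_libs="$asan_libs $lib"
done
# with no sanitizer linked, asan_libs stays empty, which is why the trace shows
# LD_PRELOAD=' .../spdk_bdev' with a leading space
LD_PRELOAD="$asan_libs $plugin"   # then prefixed to the fio launch shown earlier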
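The functional twist in fio_dif_digest is only in the generated params: hdgst and ddgst come out true, so every NVMe/TCP PDU is CRC32C-protected in both directions on top of the DIF metadata. Because the generator's heredoc uses ${hdgst:-false} and ${ddgst:-false}, flipping the digests on is just a matter of having the variables in scope when the JSON is built; a minimal bash illustration, reusing the hypothetical gen_json_sketch from the earlier note:

# with hdgst/ddgst unset the fragments say "false"; set them and regenerate
hdgst=true ddgst=true gen_json_sketch 0   # emits "hdgst": true, "ddgst": true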
00:28:56.142 07:46:12 -- nvmf/common.sh@545 -- # IFS=, 00:28:56.142 07:46:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:56.142 "params": { 00:28:56.142 "name": "Nvme0", 00:28:56.142 "trtype": "tcp", 00:28:56.142 "traddr": "10.0.0.2", 00:28:56.142 "adrfam": "ipv4", 00:28:56.142 "trsvcid": "4420", 00:28:56.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:56.142 "hdgst": true, 00:28:56.142 "ddgst": true 00:28:56.142 }, 00:28:56.142 "method": "bdev_nvme_attach_controller" 00:28:56.142 }' 00:28:56.142 07:46:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:56.142 07:46:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:56.142 07:46:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.142 07:46:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:56.142 07:46:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:56.142 07:46:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:56.142 07:46:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:56.142 07:46:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:56.142 07:46:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:56.142 07:46:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:56.142 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:56.142 ... 00:28:56.142 fio-3.35 00:28:56.142 Starting 3 threads 00:28:56.142 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.707 [2024-07-14 07:46:12.798612] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:56.707 [2024-07-14 07:46:12.798688] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:08.924 00:29:08.924 filename0: (groupid=0, jobs=1): err= 0: pid=35655: Sun Jul 14 07:46:22 2024 00:29:08.924 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(256MiB/10047msec) 00:29:08.924 slat (nsec): min=4577, max=24070, avg=14229.48, stdev=1489.89 00:29:08.924 clat (usec): min=6271, max=58265, avg=14679.91, stdev=6030.68 00:29:08.924 lat (usec): min=6285, max=58279, avg=14694.14, stdev=6030.76 00:29:08.924 clat percentiles (usec): 00:29:08.924 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11338], 00:29:08.924 | 30.00th=[13435], 40.00th=[14222], 50.00th=[14746], 60.00th=[15008], 00:29:08.924 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:29:08.924 | 99.00th=[55313], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:29:08.924 | 99.99th=[58459] 00:29:08.924 bw ( KiB/s): min=18688, max=29696, per=35.48%, avg=26176.00, stdev=2644.66, samples=20 00:29:08.924 iops : min= 146, max= 232, avg=204.50, stdev=20.66, samples=20 00:29:08.924 lat (msec) : 10=8.11%, 20=89.89%, 50=0.15%, 100=1.86% 00:29:08.924 cpu : usr=93.73%, sys=5.72%, ctx=48, majf=0, minf=142 00:29:08.924 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.924 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.924 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:08.924 filename0: (groupid=0, jobs=1): err= 0: pid=35656: Sun Jul 14 07:46:22 2024 00:29:08.924 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(256MiB/10010msec) 00:29:08.924 slat (nsec): min=4614, max=28788, avg=13997.22, stdev=1648.76 00:29:08.924 clat (usec): min=6164, max=59136, avg=14672.84, stdev=6879.46 00:29:08.924 lat (usec): min=6178, max=59150, avg=14686.84, stdev=6879.59 00:29:08.924 clat percentiles (usec): 00:29:08.924 | 1.00th=[ 6915], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[11338], 00:29:08.924 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:29:08.924 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16909], 00:29:08.924 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57410], 99.95th=[57934], 00:29:08.924 | 99.99th=[58983] 00:29:08.924 bw ( KiB/s): min=22272, max=29696, per=35.41%, avg=26124.80, stdev=1990.74, samples=20 00:29:08.924 iops : min= 174, max= 232, avg=204.10, stdev=15.55, samples=20 00:29:08.924 lat (msec) : 10=11.01%, 20=86.50%, 100=2.50% 00:29:08.924 cpu : usr=92.95%, sys=6.53%, ctx=20, majf=0, minf=135 00:29:08.924 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.924 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.924 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:08.924 filename0: (groupid=0, jobs=1): err= 0: pid=35657: Sun Jul 14 07:46:22 2024 00:29:08.924 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(213MiB/10050msec) 00:29:08.924 slat (nsec): min=4677, max=27905, avg=13933.91, stdev=1708.78 00:29:08.924 clat (usec): min=8012, max=97693, avg=17682.84, stdev=9995.41 00:29:08.924 lat (usec): min=8025, max=97707, avg=17696.77, stdev=9995.48 00:29:08.924 clat percentiles (usec): 
00:29:08.924 | 1.00th=[10290], 5.00th=[11469], 10.00th=[12518], 20.00th=[14484], 00:29:08.924 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:29:08.924 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17695], 95.00th=[54264], 00:29:08.924 | 99.00th=[57410], 99.50th=[58459], 99.90th=[96994], 99.95th=[98042], 00:29:08.924 | 99.99th=[98042] 00:29:08.924 bw ( KiB/s): min=18176, max=26624, per=29.46%, avg=21736.75, stdev=2278.26, samples=20 00:29:08.924 iops : min= 142, max= 208, avg=169.80, stdev=17.78, samples=20 00:29:08.924 lat (msec) : 10=0.94%, 20=93.24%, 50=0.24%, 100=5.58% 00:29:08.924 cpu : usr=93.25%, sys=6.25%, ctx=13, majf=0, minf=99 00:29:08.924 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.924 issued rwts: total=1701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.924 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:08.924 00:29:08.924 Run status group 0 (all jobs): 00:29:08.924 READ: bw=72.1MiB/s (75.6MB/s), 21.2MiB/s-25.5MiB/s (22.2MB/s-26.8MB/s), io=724MiB (759MB), run=10010-10050msec 00:29:08.924 07:46:23 -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:08.924 07:46:23 -- target/dif.sh@43 -- # local sub 00:29:08.924 07:46:23 -- target/dif.sh@45 -- # for sub in "$@" 00:29:08.924 07:46:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:08.924 07:46:23 -- target/dif.sh@36 -- # local sub_id=0 00:29:08.924 07:46:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:08.924 07:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.924 07:46:23 -- common/autotest_common.sh@10 -- # set +x 00:29:08.924 07:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.924 07:46:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:08.924 07:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.924 07:46:23 -- common/autotest_common.sh@10 -- # set +x 00:29:08.924 07:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.924 00:29:08.924 real 0m11.236s 00:29:08.924 user 0m29.349s 00:29:08.924 sys 0m2.136s 00:29:08.924 07:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:08.925 07:46:23 -- common/autotest_common.sh@10 -- # set +x 00:29:08.925 ************************************ 00:29:08.925 END TEST fio_dif_digest 00:29:08.925 ************************************ 00:29:08.925 07:46:23 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:08.925 07:46:23 -- target/dif.sh@147 -- # nvmftestfini 00:29:08.925 07:46:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:08.925 07:46:23 -- nvmf/common.sh@116 -- # sync 00:29:08.925 07:46:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:08.925 07:46:23 -- nvmf/common.sh@119 -- # set +e 00:29:08.925 07:46:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:08.925 07:46:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:08.925 rmmod nvme_tcp 00:29:08.925 rmmod nvme_fabrics 00:29:08.925 rmmod nvme_keyring 00:29:08.925 07:46:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:08.925 07:46:23 -- nvmf/common.sh@123 -- # set -e 00:29:08.925 07:46:23 -- nvmf/common.sh@124 -- # return 0 00:29:08.925 07:46:23 -- nvmf/common.sh@477 -- # '[' -n 29161 ']' 00:29:08.925 07:46:23 -- nvmf/common.sh@478 -- # killprocess 29161 00:29:08.925 07:46:23 -- common/autotest_common.sh@926 -- # '[' 
-z 29161 ']' 00:29:08.925 07:46:23 -- common/autotest_common.sh@930 -- # kill -0 29161 00:29:08.925 07:46:23 -- common/autotest_common.sh@931 -- # uname 00:29:08.925 07:46:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:08.925 07:46:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 29161 00:29:08.925 07:46:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:08.925 07:46:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:08.925 07:46:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 29161' 00:29:08.925 killing process with pid 29161 00:29:08.925 07:46:23 -- common/autotest_common.sh@945 -- # kill 29161 00:29:08.925 07:46:23 -- common/autotest_common.sh@950 -- # wait 29161 00:29:08.925 07:46:23 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:29:08.925 07:46:23 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:08.925 Waiting for block devices as requested 00:29:08.925 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:08.925 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:08.925 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:08.925 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:08.925 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:08.925 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:09.184 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:09.184 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:09.184 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:09.184 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:09.184 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:09.442 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:09.442 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:09.442 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:09.442 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:09.700 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:09.700 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:09.700 07:46:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:09.700 07:46:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:09.700 07:46:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.700 07:46:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:09.700 07:46:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.700 07:46:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:09.700 07:46:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.231 07:46:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:12.231 00:29:12.231 real 1m7.577s 00:29:12.231 user 6m32.587s 00:29:12.231 sys 0m18.037s 00:29:12.231 07:46:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.231 07:46:27 -- common/autotest_common.sh@10 -- # set +x 00:29:12.231 ************************************ 00:29:12.231 END TEST nvmf_dif 00:29:12.231 ************************************ 00:29:12.231 07:46:27 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:12.231 07:46:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:12.231 07:46:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:12.231 07:46:27 -- common/autotest_common.sh@10 -- # set +x 00:29:12.231 ************************************ 00:29:12.231 START TEST nvmf_abort_qd_sizes 00:29:12.231 ************************************ 00:29:12.231 
07:46:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:12.231 * Looking for test storage... 00:29:12.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:12.231 07:46:27 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.231 07:46:27 -- nvmf/common.sh@7 -- # uname -s 00:29:12.231 07:46:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.231 07:46:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.231 07:46:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.231 07:46:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.231 07:46:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.231 07:46:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.231 07:46:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.231 07:46:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.231 07:46:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.231 07:46:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.231 07:46:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.231 07:46:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.231 07:46:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.231 07:46:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.231 07:46:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.231 07:46:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.231 07:46:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.231 07:46:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.231 07:46:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.231 07:46:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.231 07:46:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.231 07:46:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.231 07:46:27 -- paths/export.sh@5 -- # export PATH 00:29:12.231 07:46:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.231 07:46:27 -- nvmf/common.sh@46 -- # : 0 00:29:12.231 07:46:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:12.231 07:46:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:12.231 07:46:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:12.231 07:46:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.232 07:46:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.232 07:46:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:12.232 07:46:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:12.232 07:46:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:12.232 07:46:27 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:29:12.232 07:46:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:12.232 07:46:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.232 07:46:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:12.232 07:46:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:12.232 07:46:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:12.232 07:46:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.232 07:46:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:12.232 07:46:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.232 07:46:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:12.232 07:46:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:12.232 07:46:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:12.232 07:46:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.613 07:46:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:13.613 07:46:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:13.613 07:46:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:13.613 07:46:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:13.613 07:46:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:13.613 07:46:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:13.613 07:46:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:13.613 07:46:29 -- nvmf/common.sh@294 -- # net_devs=() 00:29:13.613 07:46:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:13.613 07:46:29 -- nvmf/common.sh@295 -- # e810=() 00:29:13.613 07:46:29 -- nvmf/common.sh@295 -- # local -ga e810 00:29:13.613 07:46:29 -- nvmf/common.sh@296 -- # x722=() 00:29:13.613 07:46:29 -- nvmf/common.sh@296 -- # local -ga x722 00:29:13.613 07:46:29 -- nvmf/common.sh@297 -- # mlx=() 00:29:13.613 07:46:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:13.613 07:46:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.613 07:46:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:13.613 07:46:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:13.613 07:46:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:13.613 07:46:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:13.613 07:46:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:13.613 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:13.613 07:46:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:13.613 07:46:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:13.613 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:13.613 07:46:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:13.613 07:46:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:13.613 07:46:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.613 07:46:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:13.613 07:46:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.613 07:46:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:13.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:13.613 07:46:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.613 07:46:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:13.613 07:46:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.613 07:46:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:13.613 07:46:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.613 07:46:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:13.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:13.613 07:46:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.613 07:46:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:13.613 07:46:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:13.613 07:46:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:13.613 07:46:29 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:13.613 07:46:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:13.613 07:46:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.613 07:46:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.613 07:46:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.613 07:46:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:13.613 07:46:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.613 07:46:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.613 07:46:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:13.613 07:46:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.613 07:46:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.613 07:46:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:13.613 07:46:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:13.613 07:46:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.613 07:46:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.872 07:46:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.872 07:46:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.872 07:46:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:13.872 07:46:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.872 07:46:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.872 07:46:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.872 07:46:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:13.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:29:13.872 00:29:13.872 --- 10.0.0.2 ping statistics --- 00:29:13.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.872 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:29:13.872 07:46:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:29:13.872 00:29:13.872 --- 10.0.0.1 ping statistics --- 00:29:13.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.872 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:13.872 07:46:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.872 07:46:29 -- nvmf/common.sh@410 -- # return 0 00:29:13.872 07:46:29 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:29:13.872 07:46:29 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:15.247 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:15.247 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:15.247 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:15.247 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:15.247 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:15.247 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:15.247 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:15.247 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:15.247 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:15.247 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:15.247 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:15.247 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:15.247 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:15.247 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:15.247 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:15.247 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:16.184 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:29:16.184 07:46:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.184 07:46:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:16.184 07:46:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:16.184 07:46:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.184 07:46:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:16.184 07:46:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:16.184 07:46:32 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:29:16.184 07:46:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:16.184 07:46:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:16.184 07:46:32 -- common/autotest_common.sh@10 -- # set +x 00:29:16.184 07:46:32 -- nvmf/common.sh@469 -- # nvmfpid=40549 00:29:16.184 07:46:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:16.184 07:46:32 -- nvmf/common.sh@470 -- # waitforlisten 40549 00:29:16.184 07:46:32 -- common/autotest_common.sh@819 -- # '[' -z 40549 ']' 00:29:16.184 07:46:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.184 07:46:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:16.184 07:46:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.184 07:46:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:16.184 07:46:32 -- common/autotest_common.sh@10 -- # set +x 00:29:16.184 [2024-07-14 07:46:32.331149] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
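Stepping back: for a phy run, nvmf_tcp_init (traced above) wires the two E810 ports into a loop. The first port moves into a private network namespace and becomes the target side, the second stays in the root namespace as the initiator, and the two pings prove both directions before any NVMe traffic flows. The commands, collected from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> initiator

nvmfappstart then launches the target inside that namespace, which is where the EAL banner above comes from, and parks until the app's RPC socket answers:

# -m 0xf: reactors on four cores; -e 0xFFFF: all tracepoint groups enabled
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!                 # 40549 in this run
waitforlisten "$nvmfpid"   # autotest helper; polls the RPC socket until it answers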
00:29:16.184 [2024-07-14 07:46:32.331244] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.443 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.443 [2024-07-14 07:46:32.399094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.443 [2024-07-14 07:46:32.515043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:16.443 [2024-07-14 07:46:32.515222] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.443 [2024-07-14 07:46:32.515242] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.443 [2024-07-14 07:46:32.515258] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.443 [2024-07-14 07:46:32.515331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.443 [2024-07-14 07:46:32.515386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.443 [2024-07-14 07:46:32.515448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:16.443 [2024-07-14 07:46:32.515451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.375 07:46:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:17.375 07:46:33 -- common/autotest_common.sh@852 -- # return 0 00:29:17.375 07:46:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:17.375 07:46:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:17.375 07:46:33 -- common/autotest_common.sh@10 -- # set +x 00:29:17.375 07:46:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.375 07:46:33 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:17.375 07:46:33 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:29:17.375 07:46:33 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:29:17.375 07:46:33 -- scripts/common.sh@311 -- # local bdf bdfs 00:29:17.375 07:46:33 -- scripts/common.sh@312 -- # local nvmes 00:29:17.375 07:46:33 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:29:17.375 07:46:33 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:17.375 07:46:33 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:29:17.375 07:46:33 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:29:17.375 07:46:33 -- scripts/common.sh@322 -- # uname -s 00:29:17.375 07:46:33 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:29:17.375 07:46:33 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:29:17.375 07:46:33 -- scripts/common.sh@327 -- # (( 1 )) 00:29:17.375 07:46:33 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:29:17.375 07:46:33 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:29:17.375 07:46:33 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:29:17.375 07:46:33 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:29:17.375 07:46:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:17.375 07:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:17.375 07:46:33 -- common/autotest_common.sh@10 -- # set +x 00:29:17.375 ************************************ 00:29:17.375 START TEST 
spdk_target_abort 00:29:17.376 ************************************ 00:29:17.376 07:46:33 -- common/autotest_common.sh@1104 -- # spdk_target 00:29:17.376 07:46:33 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:17.376 07:46:33 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:29:17.376 07:46:33 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:29:17.376 07:46:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:17.376 07:46:33 -- common/autotest_common.sh@10 -- # set +x 00:29:20.648 spdk_targetn1 00:29:20.648 07:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.648 07:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.648 07:46:36 -- common/autotest_common.sh@10 -- # set +x 00:29:20.648 [2024-07-14 07:46:36.116175] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.648 07:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:29:20.648 07:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.648 07:46:36 -- common/autotest_common.sh@10 -- # set +x 00:29:20.648 07:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:29:20.648 07:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.648 07:46:36 -- common/autotest_common.sh@10 -- # set +x 00:29:20.648 07:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:29:20.648 07:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.648 07:46:36 -- common/autotest_common.sh@10 -- # set +x 00:29:20.648 [2024-07-14 07:46:36.148451] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.648 07:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:20.648 07:46:36 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:20.648 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.922 Initializing NVMe Controllers 00:29:23.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:23.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:23.922 Initialization complete. Launching workers. 00:29:23.922 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10284, failed: 0 00:29:23.922 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1331, failed to submit 8953 00:29:23.922 success 880, unsuccess 451, failed 0 00:29:23.922 07:46:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:23.922 07:46:39 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:23.922 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.204 Initializing NVMe Controllers 00:29:27.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:27.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:27.204 Initialization complete. Launching workers. 00:29:27.204 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8646, failed: 0 00:29:27.204 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1255, failed to submit 7391 00:29:27.204 success 324, unsuccess 931, failed 0 00:29:27.204 07:46:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:27.204 07:46:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:27.204 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.727 Initializing NVMe Controllers 00:29:29.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:29.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:29.727 Initialization complete. Launching workers. 
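[annotation] The QD=64 pass whose results follow is the last leg of rabort(), which simply sweeps the abort example over increasing queue depths against the same subsystem. A hedged reconstruction of that loop, using only the flags visible in the trace (the meaning of -w/-M is inferred from the example's mixed read/write output above):

    # Sketch of the queue-depth sweep performed by rabort() in abort_qd_sizes.sh.
    ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
    TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    for qd in 4 24 64; do
        # -q: queue depth, -w rw -M 50: mixed read/write workload,
        # -o 4096: 4 KiB I/Os, -r: transport ID string built field by field above
        "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
    done

Each pass reports how many I/Os completed, how many aborts were submitted, and the success/unsuccess split for those aborts, as in the summaries printed above and below.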
00:29:29.727 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31720, failed: 0 00:29:29.727 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2677, failed to submit 29043 00:29:29.727 success 558, unsuccess 2119, failed 0 00:29:29.727 07:46:45 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:29:29.727 07:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.727 07:46:45 -- common/autotest_common.sh@10 -- # set +x 00:29:29.984 07:46:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.984 07:46:45 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:29.984 07:46:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.984 07:46:45 -- common/autotest_common.sh@10 -- # set +x 00:29:31.357 07:46:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.357 07:46:47 -- target/abort_qd_sizes.sh@62 -- # killprocess 40549 00:29:31.357 07:46:47 -- common/autotest_common.sh@926 -- # '[' -z 40549 ']' 00:29:31.357 07:46:47 -- common/autotest_common.sh@930 -- # kill -0 40549 00:29:31.357 07:46:47 -- common/autotest_common.sh@931 -- # uname 00:29:31.357 07:46:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:31.357 07:46:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40549 00:29:31.357 07:46:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:31.357 07:46:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:31.357 07:46:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40549' 00:29:31.357 killing process with pid 40549 00:29:31.357 07:46:47 -- common/autotest_common.sh@945 -- # kill 40549 00:29:31.357 07:46:47 -- common/autotest_common.sh@950 -- # wait 40549 00:29:31.357 00:29:31.357 real 0m14.246s 00:29:31.357 user 0m54.832s 00:29:31.357 sys 0m3.146s 00:29:31.357 07:46:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:31.357 07:46:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.357 ************************************ 00:29:31.357 END TEST spdk_target_abort 00:29:31.357 ************************************ 00:29:31.619 07:46:47 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:29:31.619 07:46:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:31.619 07:46:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:31.619 07:46:47 -- common/autotest_common.sh@10 -- # set +x 00:29:31.619 ************************************ 00:29:31.619 START TEST kernel_target_abort 00:29:31.619 ************************************ 00:29:31.619 07:46:47 -- common/autotest_common.sh@1104 -- # kernel_target 00:29:31.619 07:46:47 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:29:31.619 07:46:47 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:29:31.619 07:46:47 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:29:31.619 07:46:47 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:29:31.619 07:46:47 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:29:31.619 07:46:47 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:31.619 07:46:47 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:31.619 07:46:47 -- nvmf/common.sh@627 -- # local block nvme 00:29:31.619 07:46:47 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:29:31.619 07:46:47 -- nvmf/common.sh@630 -- # modprobe nvmet 00:29:31.619 07:46:47 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:31.619 07:46:47 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:32.553 Waiting for block devices as requested 00:29:32.553 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:32.812 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:32.812 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:33.071 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:33.071 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:33.071 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:33.071 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:33.330 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:33.330 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:33.330 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:33.330 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:33.330 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:33.588 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:33.588 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:33.588 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:33.588 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:33.846 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:33.846 07:46:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:29:33.846 07:46:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:33.846 07:46:49 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:29:33.846 07:46:49 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:29:33.846 07:46:49 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:33.846 No valid GPT data, bailing 00:29:33.846 07:46:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:33.846 07:46:49 -- scripts/common.sh@393 -- # pt= 00:29:33.846 07:46:49 -- scripts/common.sh@394 -- # return 1 00:29:33.846 07:46:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:29:33.846 07:46:49 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:29:33.846 07:46:49 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:33.846 07:46:49 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:33.846 07:46:49 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:33.846 07:46:49 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:29:33.846 07:46:49 -- nvmf/common.sh@654 -- # echo 1 00:29:33.846 07:46:49 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:29:33.846 07:46:49 -- nvmf/common.sh@656 -- # echo 1 00:29:33.846 07:46:49 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:29:33.846 07:46:49 -- nvmf/common.sh@663 -- # echo tcp 00:29:33.846 07:46:49 -- nvmf/common.sh@664 -- # echo 4420 00:29:33.846 07:46:49 -- nvmf/common.sh@665 -- # echo ipv4 00:29:33.846 07:46:49 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:33.846 07:46:49 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:34.104 00:29:34.104 Discovery Log Number of Records 2, Generation counter 2 00:29:34.104 =====Discovery Log Entry 0====== 00:29:34.104 trtype: tcp 00:29:34.104 adrfam: ipv4 00:29:34.104 
subtype: current discovery subsystem 00:29:34.104 treq: not specified, sq flow control disable supported 00:29:34.104 portid: 1 00:29:34.104 trsvcid: 4420 00:29:34.104 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:34.104 traddr: 10.0.0.1 00:29:34.104 eflags: none 00:29:34.104 sectype: none 00:29:34.104 =====Discovery Log Entry 1====== 00:29:34.104 trtype: tcp 00:29:34.104 adrfam: ipv4 00:29:34.104 subtype: nvme subsystem 00:29:34.104 treq: not specified, sq flow control disable supported 00:29:34.104 portid: 1 00:29:34.104 trsvcid: 4420 00:29:34.104 subnqn: kernel_target 00:29:34.104 traddr: 10.0.0.1 00:29:34.104 eflags: none 00:29:34.104 sectype: none 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:34.104 07:46:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.105 07:46:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:34.105 07:46:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.105 07:46:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:34.105 07:46:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:34.105 07:46:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:34.105 07:46:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:34.105 07:46:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:34.105 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.380 Initializing NVMe Controllers 00:29:37.380 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:37.380 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:37.380 Initialization complete. Launching workers. 
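[annotation] The kernel target that the discovery log above just enumerated was assembled a few lines earlier by configure_kernel_target, driving the in-kernel nvmet target purely through configfs. The trace strips the redirection targets of the echo commands, so the attribute files below are inferred from the standard nvmet configfs layout; subsystem name, backing device, and listener address are as shown in the log:

    # Sketch of configure_kernel_target (echo targets inferred; values as traced).
    modprobe nvmet
    SUB=/sys/kernel/config/nvmet/subsystems/kernel_target
    PORT=/sys/kernel/config/nvmet/ports/1
    mkdir "$SUB" "$SUB/namespaces/1" "$PORT"
    echo SPDK-kernel_target > "$SUB/attr_serial"             # assumed target of 'echo SPDK-kernel_target'
    echo 1                  > "$SUB/attr_allow_any_host"     # assumed: permissive test setup
    echo /dev/nvme0n1       > "$SUB/namespaces/1/device_path"
    echo 1                  > "$SUB/namespaces/1/enable"
    echo 10.0.0.1           > "$PORT/addr_traddr"
    echo tcp                > "$PORT/addr_trtype"
    echo 4420               > "$PORT/addr_trsvcid"
    echo ipv4               > "$PORT/addr_adrfam"
    ln -s "$SUB" "$PORT/subsystems/"                         # expose the subsystem on the port

The "nvme discover" call above then confirms both the discovery subsystem and kernel_target are reachable at 10.0.0.1:4420 before the abort sweep begins.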
00:29:37.380 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 26824, failed: 0 00:29:37.380 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26824, failed to submit 0 00:29:37.380 success 0, unsuccess 26824, failed 0 00:29:37.380 07:46:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:37.380 07:46:53 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:37.380 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.658 Initializing NVMe Controllers 00:29:40.658 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:40.658 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:40.658 Initialization complete. Launching workers. 00:29:40.658 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 55440, failed: 0 00:29:40.658 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 13946, failed to submit 41494 00:29:40.658 success 0, unsuccess 13946, failed 0 00:29:40.658 07:46:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:40.658 07:46:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:40.658 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.185 Initializing NVMe Controllers 00:29:43.185 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:43.185 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:43.185 Initialization complete. Launching workers. 
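[annotation] After the QD=64 pass below completes, clean_kernel_target unwinds the configfs state in strict reverse order; a condensed sketch of that teardown, matching the commands traced after the final results:

    # Sketch of clean_kernel_target (paths as in the log; 'echo 0' target inferred).
    SUB=/sys/kernel/config/nvmet/subsystems/kernel_target
    PORT=/sys/kernel/config/nvmet/ports/1
    echo 0 > "$SUB/namespaces/1/enable"        # disable the namespace first
    rm -f "$PORT/subsystems/kernel_target"     # drop the port->subsystem symlink
    rmdir "$SUB/namespaces/1"
    rmdir "$PORT"
    rmdir "$SUB"
    modprobe -r nvmet_tcp nvmet                # unload once configfs is empty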
00:29:43.185 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 54461, failed: 0 00:29:43.185 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 13590, failed to submit 40871 00:29:43.185 success 0, unsuccess 13590, failed 0 00:29:43.445 07:46:59 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:29:43.445 07:46:59 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:29:43.445 07:46:59 -- nvmf/common.sh@677 -- # echo 0 00:29:43.445 07:46:59 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:29:43.445 07:46:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:43.445 07:46:59 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:43.445 07:46:59 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:43.445 07:46:59 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:29:43.445 07:46:59 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:29:43.445 00:29:43.445 real 0m11.861s 00:29:43.445 user 0m3.987s 00:29:43.445 sys 0m2.493s 00:29:43.445 07:46:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.445 07:46:59 -- common/autotest_common.sh@10 -- # set +x 00:29:43.445 ************************************ 00:29:43.445 END TEST kernel_target_abort 00:29:43.445 ************************************ 00:29:43.445 07:46:59 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:29:43.445 07:46:59 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:29:43.445 07:46:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:43.445 07:46:59 -- nvmf/common.sh@116 -- # sync 00:29:43.445 07:46:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:43.445 07:46:59 -- nvmf/common.sh@119 -- # set +e 00:29:43.445 07:46:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:43.445 07:46:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:43.445 rmmod nvme_tcp 00:29:43.445 rmmod nvme_fabrics 00:29:43.445 rmmod nvme_keyring 00:29:43.445 07:46:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:43.445 07:46:59 -- nvmf/common.sh@123 -- # set -e 00:29:43.445 07:46:59 -- nvmf/common.sh@124 -- # return 0 00:29:43.445 07:46:59 -- nvmf/common.sh@477 -- # '[' -n 40549 ']' 00:29:43.445 07:46:59 -- nvmf/common.sh@478 -- # killprocess 40549 00:29:43.445 07:46:59 -- common/autotest_common.sh@926 -- # '[' -z 40549 ']' 00:29:43.445 07:46:59 -- common/autotest_common.sh@930 -- # kill -0 40549 00:29:43.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (40549) - No such process 00:29:43.445 07:46:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 40549 is not found' 00:29:43.445 Process with pid 40549 is not found 00:29:43.445 07:46:59 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:29:43.445 07:46:59 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:44.821 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:44.821 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:44.821 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:44.821 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:44.821 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:44.821 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:44.821 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 
00:29:44.821 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:44.821 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:44.821 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:44.821 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:44.821 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:44.821 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:44.821 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:44.821 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:44.821 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:44.821 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:44.821 07:47:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:44.821 07:47:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:44.821 07:47:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.821 07:47:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:44.821 07:47:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.821 07:47:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:44.821 07:47:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.355 07:47:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:47.355 00:29:47.355 real 0m35.064s 00:29:47.355 user 1m1.020s 00:29:47.355 sys 0m8.967s 00:29:47.355 07:47:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:47.355 07:47:02 -- common/autotest_common.sh@10 -- # set +x 00:29:47.355 ************************************ 00:29:47.355 END TEST nvmf_abort_qd_sizes 00:29:47.355 ************************************ 00:29:47.355 07:47:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:47.355 07:47:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:47.355 07:47:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:47.355 07:47:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:47.355 07:47:02 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:47.355 07:47:02 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:47.355 07:47:02 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:47.355 07:47:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:47.355 07:47:02 -- common/autotest_common.sh@10 -- # set +x 00:29:47.355 07:47:02 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:47.355 07:47:02 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:47.355 07:47:02 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:47.355 07:47:02 -- common/autotest_common.sh@10 -- # set +x 00:29:48.727 INFO: APP EXITING 00:29:48.727 INFO: killing all VMs 00:29:48.727 INFO: killing vhost app 00:29:48.727 INFO: EXIT DONE 00:29:49.661 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:49.661 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:29:49.661 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:49.661 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:49.661 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:49.661 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:49.661 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:49.661 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:49.661 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:49.661 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:49.661 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:49.661 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:49.661 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:49.661 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:49.661 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:49.661 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:49.661 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:51.037 Cleaning 00:29:51.037 Removing: /var/run/dpdk/spdk0/config 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:51.037 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:51.037 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:51.037 Removing: /var/run/dpdk/spdk1/config 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:51.037 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:51.038 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:51.038 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:51.038 Removing: /var/run/dpdk/spdk2/config 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:51.038 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:51.038 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:51.038 Removing: /var/run/dpdk/spdk3/config 00:29:51.038 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:51.038 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:51.038 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:51.038 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:51.038 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:51.038 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:51.038 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:51.038 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:51.038 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:51.038 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:51.038 Removing: /var/run/dpdk/spdk4/config 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:51.038 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:51.038 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:51.038 Removing: /dev/shm/bdev_svc_trace.1 00:29:51.038 Removing: /dev/shm/nvmf_trace.0 00:29:51.038 Removing: /dev/shm/spdk_tgt_trace.pid3969232 00:29:51.038 Removing: /var/run/dpdk/spdk0 00:29:51.038 Removing: /var/run/dpdk/spdk1 00:29:51.038 Removing: /var/run/dpdk/spdk2 00:29:51.038 Removing: /var/run/dpdk/spdk3 00:29:51.038 Removing: /var/run/dpdk/spdk4 00:29:51.038 Removing: /var/run/dpdk/spdk_pid10217 00:29:51.038 Removing: /var/run/dpdk/spdk_pid10766 00:29:51.038 Removing: /var/run/dpdk/spdk_pid11316 00:29:51.295 Removing: /var/run/dpdk/spdk_pid11867 00:29:51.295 Removing: /var/run/dpdk/spdk_pid14540 00:29:51.295 Removing: /var/run/dpdk/spdk_pid14685 00:29:51.295 Removing: /var/run/dpdk/spdk_pid18544 00:29:51.295 Removing: /var/run/dpdk/spdk_pid18725 00:29:51.295 Removing: /var/run/dpdk/spdk_pid20485 00:29:51.295 Removing: /var/run/dpdk/spdk_pid26280 00:29:51.295 Removing: /var/run/dpdk/spdk_pid26291 00:29:51.295 Removing: /var/run/dpdk/spdk_pid29341 00:29:51.295 Removing: /var/run/dpdk/spdk_pid30784 00:29:51.295 Removing: /var/run/dpdk/spdk_pid32222 00:29:51.295 Removing: /var/run/dpdk/spdk_pid33107 00:29:51.295 Removing: /var/run/dpdk/spdk_pid34576 00:29:51.295 Removing: /var/run/dpdk/spdk_pid35473 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3967539 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3968285 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3969232 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3969718 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3970938 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3971881 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3972190 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3972390 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3972724 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3972922 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3973088 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3973363 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3973545 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3973882 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3976427 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3976713 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3976896 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3977035 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3977357 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3977491 00:29:51.295 Removing: 
/var/run/dpdk/spdk_pid3977932 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3978062 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3978238 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3978380 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3978552 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3978694 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3979060 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3979333 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3979539 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3979715 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3979861 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3979924 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3980137 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3980342 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3980495 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3980651 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3980914 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3981074 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3981223 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3981497 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3981647 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3981808 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3982071 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3982230 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3982373 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3982618 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3982792 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3982960 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3983115 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3983381 00:29:51.295 Removing: /var/run/dpdk/spdk_pid3983524 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3983692 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3983949 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3984114 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3984255 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3984538 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3984680 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3984842 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3985104 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3985265 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3985408 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3985654 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3985833 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3985998 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3986229 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3986428 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3986569 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3986873 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3987014 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3987175 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3987436 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3987604 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3987781 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3987986 00:29:51.296 Removing: /var/run/dpdk/spdk_pid3990102 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4045924 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4048578 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4056311 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4059668 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4062182 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4062717 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4067804 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4068087 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4070656 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4074526 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4076647 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4083268 00:29:51.296 Removing: 
/var/run/dpdk/spdk_pid4088666 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4089987 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4090811 00:29:51.296 Removing: /var/run/dpdk/spdk_pid40986 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4101825 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4104073 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4107037 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4108251 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4109626 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4109774 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4110047 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4110199 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4110795 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4112160 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4113179 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4113624 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4117208 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4120703 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4124964 00:29:51.296 Removing: /var/run/dpdk/spdk_pid41390 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4148472 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4151195 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4155643 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4156739 00:29:51.296 Removing: /var/run/dpdk/spdk_pid4157870 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4160450 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4162965 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4167341 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4167349 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4170188 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4170413 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4170565 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4170838 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4170849 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4171949 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4173170 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4174388 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4175603 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4176820 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4178095 00:29:51.554 Removing: /var/run/dpdk/spdk_pid41797 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4182044 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4182391 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4183705 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4184458 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4188863 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4190908 00:29:51.554 Removing: /var/run/dpdk/spdk_pid43269 00:29:51.554 Removing: /var/run/dpdk/spdk_pid43681 00:29:51.554 Removing: /var/run/dpdk/spdk_pid44090 00:29:51.554 Removing: /var/run/dpdk/spdk_pid4458 00:29:51.554 Removing: /var/run/dpdk/spdk_pid560 00:29:51.554 Removing: /var/run/dpdk/spdk_pid8205 00:29:51.554 Removing: /var/run/dpdk/spdk_pid8665 00:29:51.554 Removing: /var/run/dpdk/spdk_pid9189 00:29:51.554 Removing: /var/run/dpdk/spdk_pid9615 00:29:51.554 Clean 00:29:51.554 killing process with pid 3939211 00:29:59.662 killing process with pid 3939208 00:29:59.662 killing process with pid 3939210 00:29:59.662 killing process with pid 3939209 00:29:59.662 07:47:15 -- common/autotest_common.sh@1436 -- # return 0 00:29:59.662 07:47:15 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:29:59.662 07:47:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:59.662 07:47:15 -- common/autotest_common.sh@10 -- # set +x 00:29:59.662 07:47:15 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:29:59.662 07:47:15 -- common/autotest_common.sh@718 -- # xtrace_disable 
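[annotation] The coverage post-processing traced below boils down to three steps: capture this run's counters, merge them with the pre-test baseline, then strip vendored and ancillary code from the combined tracefile. A sketch with the workspace path shortened to $SPDK_DIR and the branch/function --rc switches abbreviated from the trace:

    # Sketch of the lcov flow that follows (flags as traced; rc switches abbreviated).
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    $LCOV -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info     # capture counters from this run
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info     # merge with the baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        $LCOV -r cov_total.info "$pat" -o cov_total.info          # drop vendored/example code
    done

The successive -r removals below match this loop one pattern at a time, each rewriting cov_total.info in place.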
00:29:59.662 07:47:15 -- common/autotest_common.sh@10 -- # set +x 00:29:59.662 07:47:15 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:59.662 07:47:15 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:59.662 07:47:15 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:59.662 07:47:15 -- spdk/autotest.sh@394 -- # hash lcov 00:29:59.662 07:47:15 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:59.662 07:47:15 -- spdk/autotest.sh@396 -- # hostname 00:29:59.662 07:47:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:59.662 geninfo: WARNING: invalid characters removed from testname! 00:30:26.223 07:47:40 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:28.761 07:47:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:32.113 07:47:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:35.409 07:47:51 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:37.948 07:47:53 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:41.242 07:47:57 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:44.531 07:47:59 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:44.531 07:47:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.531 07:48:00 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:44.531 07:48:00 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.531 07:48:00 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.531 07:48:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.531 07:48:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.531 07:48:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.531 07:48:00 -- paths/export.sh@5 -- $ export PATH 00:30:44.531 07:48:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.531 07:48:00 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:44.531 07:48:00 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:44.531 07:48:00 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720936080.XXXXXX 00:30:44.531 07:48:00 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720936080.NSKi2t 00:30:44.531 07:48:00 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:44.531 07:48:00 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:44.531 07:48:00 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:44.531 07:48:00 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:44.531 07:48:00 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:44.531 07:48:00 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:44.531 07:48:00 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:44.531 07:48:00 -- common/autotest_common.sh@10 -- $ set +x 00:30:44.531 07:48:00 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:30:44.531 07:48:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:30:44.531 07:48:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:44.531 07:48:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:44.531 07:48:00 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:44.531 07:48:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:44.531 07:48:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:44.531 07:48:00 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:44.531 07:48:00 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:44.531 07:48:00 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:44.531 07:48:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:44.531 + [[ -n 3896927 ]] 00:30:44.531 + sudo kill 3896927 00:30:44.542 [Pipeline] } 00:30:44.560 [Pipeline] // stage 00:30:44.565 [Pipeline] } 00:30:44.582 [Pipeline] // timeout 00:30:44.588 [Pipeline] } 00:30:44.605 [Pipeline] // catchError 00:30:44.610 [Pipeline] } 00:30:44.627 [Pipeline] // wrap 00:30:44.634 [Pipeline] } 00:30:44.656 [Pipeline] // catchError 00:30:44.665 [Pipeline] stage 00:30:44.668 [Pipeline] { (Epilogue) 00:30:44.686 [Pipeline] catchError 00:30:44.688 [Pipeline] { 00:30:44.707 [Pipeline] echo 00:30:44.710 Cleanup processes 00:30:44.718 [Pipeline] sh 00:30:45.010 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:45.010 55848 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:45.026 [Pipeline] sh 00:30:45.312 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:45.312 ++ grep -v 'sudo pgrep' 00:30:45.312 ++ awk '{print $1}' 00:30:45.312 + sudo kill -9 00:30:45.312 + true 00:30:45.334 [Pipeline] sh 00:30:45.663 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:57.872 [Pipeline] sh 00:30:58.154 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:58.154 Artifacts sizes are good 00:30:58.166 [Pipeline] archiveArtifacts 00:30:58.172 Archiving artifacts 00:30:58.407 [Pipeline] sh 00:30:58.688 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:58.701 [Pipeline] cleanWs 00:30:58.709 [WS-CLEANUP] Deleting project workspace... 00:30:58.709 [WS-CLEANUP] Deferred wipeout is used... 00:30:58.716 [WS-CLEANUP] done 00:30:58.718 [Pipeline] } 00:30:58.735 [Pipeline] // catchError 00:30:58.743 [Pipeline] sh 00:30:59.019 + logger -p user.info -t JENKINS-CI 00:30:59.027 [Pipeline] } 00:30:59.043 [Pipeline] // stage 00:30:59.048 [Pipeline] } 00:30:59.065 [Pipeline] // node 00:30:59.070 [Pipeline] End of Pipeline 00:30:59.100 Finished: SUCCESS